Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5715–5725 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5715 Not All Claims are Created Equal: Choosing the Right Statistical Approach to Assess Hypotheses Erfan Sadeqi Azer1 Daniel Khashabi2∗Ashish Sabharwal2 Dan Roth3 1Indiana University 2Allen Institute for Artificial Intelligence 3University of Pennsylvania [email protected] {danielk,ashishs}@allenai.org [email protected] Abstract Empirical research in Natural Language Processing (NLP) has adopted a narrow set of principles for assessing hypotheses, relying mainly on p-value computation, which suffers from several known issues. While alternative proposals have been well-debated and adopted in other fields, they remain rarely discussed or used within the NLP community. We address this gap by contrasting various hypothesis assessment techniques, especially those not commonly used in the field (such as evaluations based on Bayesian inference). Since these statistical techniques differ in the hypotheses they can support, we argue that practitioners should first decide their target hypothesis before choosing an assessment method. This is crucial because common fallacies, misconceptions, and misinterpretation surrounding hypothesis assessment methods often stem from a discrepancy between what one would like to claim versus what the method used actually assesses. Our survey reveals that these issues are omnipresent in the NLP research community. As a step forward, we provide best practices and guidelines tailored towards NLP research, as well as an easy-to-use package called HyBayes for Bayesian assessment of hypotheses,1 complementing existing tools. 1 Introduction Empirical fields, such as Natural Language Processing (NLP), must follow scientific principles for assessing hypotheses and drawing conclusions from experiments. For instance, suppose we come across the results in Table 1, summarizing the accuracy of two question-answering (QA) systems S1 and S2 on some datasets. What is the correct way to interpret this empirical observation in terms of ∗Work done while the second author was affiliated with the University of Pennsylvania. 1https://github.com/allenai/HyBayes System Description ARC-easy ARC-challenge ID #Correct Acc. #Correct Acc. S1 BERT 1721 72.4 566 48.3 S2 Reading Strategies 1637 68.9 496 42.3 Table 1: Performance of two systems (Devlin et al., 2019; Sun et al., 2018) on the ARC question-answering dataset (Clark et al., 2018). ARC-easy & ARCchallenge have 2376 & 1172 instances, respectively. Acc.: accuracy as a percentage. the superiority of one system over another? While S1 has higher accuracy than S2 in both cases, the gap is moderate and the datasets are of limited size. Can this apparent difference in performance be explained simply by random chance, or do we have sufficient evidence to conclude that S1 is in fact inherently different (in particular, inherently stronger) than S2 on these datasets? If the latter, can we quantify this gap in inherent strength while accounting for random fluctuation? Such fundamental questions arise in one form or another in every empirical NLP effort. Researchers often wish to draw conclusions such as: (Ca) I’m 95% confident that S1 and S2 are inherently different, in the sense that if they were inherently identical, it would be highly unlikely to witness the observed 3.5% empirical gap for ARC-easy. 
(Cb) With probability at least 95%, the inherent accuracy of S1 exceeds that of S2 by at least 1% for ARC-easy. These two conclusions differ in two respects. First, Ca claims the two systems are inherently different, while Cb goes further to claim a margin of at least 1% between their inherent accuracies. The second, more subtle difference lies in the interpretation of the 95% figure: the 95% confidence expressed in Ca is in terms of the space of empirical observations we could have made, given some underlying truth about how the inherent accuracies of S1 and S2 relate; while the 95% probability expressed in Cb is directly over the space of possible 5716 inherent accuracies of the two systems. To support such a claim, one must turn it into a proper mathematical statement that can be validated using a statistical calculation. This in turn brings in additional choices: we can make at least four statistically distinct hypotheses here, each supported by a different statistical evaluation: (H1) Assuming S1 and S2 have inherently identical accuracy, the probability (p-value) of making a hypothetical observation with an accuracy gap at least as large as the empirical observation (here, 3.5%) is at most 5% (making us 95% confident that the above assumption is false). (H2) Assuming S1 and S2 have inherently identical accuracy, the empirical accuracy gap (here, 3.5%) is larger than the maximum possible gap (confidence interval) that could hypothetically be observed with a probability of over 5% (making us 95% confident that the above assumption is false). (H3) Assume a prior belief (a probability distribution) w.r.t. the inherent accuracy of typical systems. Given the empirically observed accuracies, the probability (posterior interval) that the inherent accuracy of S1 exceeds that of S2 by a margin of 1% is at least 95%. (H4) Assume a prior belief (a probability distribution) w.r.t. the inherent accuracies of typical systems. Given the empirically observed accuracies, the odds increase by a factor of 1.32 (Bayes factor) in favor of the hypothesis that the inherent accuracy of S1 exceeds that of S2 by a margin of 1%. As this illustrates, there are multiple ways to formulate empirical hypotheses and support empirical claims. Since each hypothesis starts with a different assumption and makes a (mathematically) different claim, it can only be tested with a certain set of statistical methods. Therefore, NLP practitioners ought to define their target hypothesis before choosing an assessment method. The most common statistical methodology used in NLP is null-hypothesis significance testing (NHST) which uses p-values (Søgaard et al., 2014; Koehn, 2004; Dror and Reichart, 2018). Hypotheses H1&H2 can be tested with p-value-based methods, which include confidence intervals and operate over the probability space of observations2 (§2.1 and §2.2). On the other hand, there are often overlooked approaches, based on Bayesian inference (Kruschke and Liddell, 2018), that can be used to assess hypotheses H3&H4 (§2.3 and §2.4) and have two broad strengths: they can deal more naturally with accuracy margins and they operate directly over the probability space of inherent accuracy (rather than of observations). For each technique reviewed in this work, we 2More precisely, over the probability space of an aggregation function over observations, called test statistics. discuss how it compares with alternatives and summarize common misinterpretations surrounding it (§3). 
For example, a common misconception about p-value is that it represents a probability of the validity of a hypothesis. While desirable, p-values in fact do not provide such a probabilistic interpretation (§3.2). It is instead through a Bayesian analysis of the posterior distribution of the test statistic (inherent accuracy in the earlier example) that one can make claims about the probability space of that statistic, such as H3. We quantify and demonstrate related common malpractices in the field through a manual annotation of 439 ACL-2018 conference papers,3 and a survey filled out by 55 NLP researchers (§4). We highlight surprising findings from the survey, such as the following: While 86% expressed fairto-complete confidence in the interpretation of pvalues, only a small percentage of them correctly answered a basic p-value interpretation question. Contributions. This work seeks to inform the NLP community about crucial distinctions between various statistical hypotheses and their corresponding assessment methods, helping move the community towards well-substantiated empirical claims and conclusions. Our exposition covers a broader range of methods (§2) than those included in recent related efforts (§1.1), and highlights that these methods achieve different goals. Our surveys of NLP researchers reveals problematic trends (§4), emphasizing the need for increased scrutiny and clarity. We conclude by suggesting guidelines for better testing (§5), as well as providing a toolkit called HyBayes (cf. Footnote 1) tailored towards commonly used NLP metrics. We hope this work will encourage an improved understanding of statistical assessment methods and effective reporting practices with measures of uncertainty. 1.1 Related Work While there is an abundant discussion of significance testing in other fields, only a handful of NLP efforts address it. For instance, Chinchor (1992) defined the principles of using hypothesis testing in the context of NLP problems. Mostnotably, there are works studying various randomized tests (Koehn, 2004; Ojala and Garriga, 2010; Graham et al., 2014), or metric-specific tests (Evert, 2004). More recently, Dror et al. (2018) and Dror and Reichart (2018) provide a thorough review of 3https://www.aclweb.org/anthology/events/acl-2018/ 5717 frequentist tests. While an important step in better informing the community, it covers a subset of statistical tools. Our work complements this effort by pointing out alternative tests. With increasing over-reliance on certain hypothesis testing techniques, there are growing troubling trends of misuse or misinterpretation of such techniques (Goodman, 2008; Demˇsar, 2008). Some communities, such as statistics and psychology, even have published guidelines and restrictions on the use of p-values (Trafimow and Marks, 2015; Wasserstein et al., 2016). In parallel, some authors have advocated for using alternate paradigms such as Bayesian evaluations (Kruschke, 2010). NLP is arguably an equally empirical field, yet with a rare discussion of proper practices of scientific testing, common pitfalls, and various alternatives. In particular, while limitations of p-values are heavily discussed in statistics and psychology, only a few NLP efforts approach them: over-estimation of significance by model-based tests (Riezler and Maxwell, 2005), lack of independence assumption in practice (Berg-Kirkpatrick et al., 2012), and sensitivity to the choice of the significance level (Søgaard et al., 2014). 
Our goal is to provide a unifying view of the pitfalls and best practices, and equip NLP researchers with Bayesian hypothesis assessment approaches as an important alternative tool in their toolkit. 2 Assessment of Hypotheses We often wish to draw qualitative inferences based on the outcome of experiments (for example, inferring the relative inherent performance of systems). To do so, we usually formulate a hypothesis that can be assessed through some analysis. Suppose we want to compare two systems on a dataset of instances x = [x1, . . . , xn] with respect to a measure M(S, x) representing the performance of a system S on an instance x. Let M(S, x) denote the vector [M(S, xi)]n i=1. Given systems S1, S2, define y ≜[M(S1, x), M(S2, x)] as a vector of observations.4 In a typical NLP experiment, the goal is to infer some inherent and unknown properties of systems. To this end, a practitioner assumes a probability distribution on the observations y, parameterized 4For simplicity of exposition, we assume the performances of two systems are on a single dataset. However, the discussion also applies to observations on multiple different datasets. Figure 1: Progression of steps taken during a scientific assessment of claims from empirical observations. by θ, the properties of the systems. In other words, y is assumed to have a distribution5 with unknown parameters θ. In this setting, a hypothesis H is a condition on θ. Hypothesis assessment is a way of evaluating the degree to which the observations y are compatible with H. The overall process is depicted in Figure 1. Following our running example, we use the task of answering natural language questions (Clark et al., 2018). While our examples are shown for this particular task, all the ideas are applicable to more general experimental settings. For this task, the performance metric M(S, x) is defined as a binary function indicating whether a system S answers a given question x correctly or not. The performance vector M(S, x) captures the system’s accuracy on the entire dataset (cf. Table 1). We assume that each system Si has an unknown inherent accuracy value, denoted θi. Let θ = [θ1, θ2] denote the unknown inherent accuracy of two systems. In this setup, one might, for instance, be interested in assessing the credibility of the hypothesis H that θ1 < θ2. Table 2 shows a categorization of statistical tools developed for the assessment of such hypotheses. The two tools on the left are based on frequentist statistics, while the ones on the right are based on Bayesian inference (Kruschke and Liddell, 2018). A complementary categorization of these tools is based on the nature of the results that they provide: the ones on the top encourage binary decision mak5Parametric tests assume this distribution, while nonparametric tests do not. 5718 Table 2: Various classes of methods for statistical assessment of hypotheses. ing, while those on the bottom provide uncertainty around estimates. We discuss all four classes of tests in the following sub-sections. 2.1 Null-Hypothesis Significance Testing In frequentist hypothesis testing, there is an asymmetric relationship between two hypotheses. The hypothesis formulated to be rejected is usually called the null-hypothesis H0. For instance, in our example H0: θ1 = θ2. A decision procedure is devised by which, depending on y, the null-hypothesis will either be rejected in favor of H1, or the test will stay undecided. 
A key notion here is p-value, the probability, under the null-hypothesis H0, of observing an outcome at least equal to or extreme than the empirical observations y. To apply this notion on a set of observations y, one has to define a function that maps y to a numerical value. This function is called the test statistic δ(.) and it formalizes the interpretation of extremeness. Concretely, p-value is defined as, P(δ(Y ) ≥δ(y)|H0) (1) In this notation, Y is a random variable over possible observations and δ(y) is the empirically observed value of the test statistic. A large p-value implies that the data could easily have been observed under the null-hypothesis. Therefore, a lower p-value is used as evidence towards rejecting the null-hypothesis. Example 1 (Assessment of H1) We form a null-hypothesis using the accuracy of the two systems (Table 1) using a one-sided z-test a with δ(y) ≜ (1/n) Pn i=1 [M(S1, xi) −M(S2, xi)] . We formulate a null-hypothesis against the claim of S1 having strictly better accuracy than S2. This results in a p-value of 0.0037 (details in §A.1) and can be interpreted as the following: if the systems have inherently identical accuracy values, the probability of observing a superiority at least as extreme as our observations is 0.0037. For a significance level of 0.05 (picked before the test) this p-value is small enough to reject the null-hypothesis. aThe choice of this test is based on an implicit assumption that two events corresponding to answering two distinct questions, are independent with identical probability, i.e., equal to the inherent accuracy of the system. Hence, the number of correct answers follows a binomial distribution. Since, the total number of questions is large, i.e., 2376 in ARC-easy, this distribution can be approximated with a normal distribution. It is possible to use other tests with less restrictive assumptions (see Dror et al. (2018)), but for the sake of simplicity we use this test to illustrate core ideas of “p-value” analysis. This family of the tests is thus far the most widely used tool in NLP research. Each variant of this test is based on some assumptions about the distribution of the observations, under the nullhypothesis, and an appropriate definition of the test statistics δ(.). Since a complete exposition of such tests is outside the scope of this work, we encourage interested readers to refer to the existing reviews, such as Dror et al. (2018). 2.2 Confidence Intervals Confidence Intervals (CIs) are used to express the uncertainty of estimated parameters. In particular, the 95% CI is the range of values for parameter θ such that the corresponding test based on p-value is not rejected: P(δ(Y ) ≥δ(y)|H0(θ)) ≥0.05. (2) In other words, the confidence interval merely asks which values of the parameter θ could be used, before the test is rejected. Example 2 (Assessment of H2) Consider the same setting as in Example 1. According to Table 1, the estimated value of the accuracy differences (maximum-likelihood estimates) is θ1 −θ2 = 0.035. A 95% CI of this quantity provides a range of values that are not rejected under the corresponding null-hypothesis. In particular, a 95% CI gives θ1 −θ2 ∈[0.0136, 0.057] (details in §A.2). The blue bar in Figure 2 (right) shows the corresponding CI. Notice that the conclusion of Example 1 is compatible with this CI; the null-hypothesis θ1 = θ2 which got rejected is not included in the CI. 5719 2.3 Posterior Intervals Bayesian methods focus on prior and posterior distributions of θ. 
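To make Examples 1 and 2 concrete, the sketch below reproduces the reported p-value of 0.0037 and a compatible interval using an unpooled one-sided two-proportion z-test on the ARC-easy counts from Table 1. This is a minimal sketch, not the authors' exact code: the paper's construction (its Appendix A.1 and A.2) may differ in details such as variance pooling and the one-sided interval convention.

```python
# Minimal sketch of Examples 1 & 2 (not the authors' exact implementation):
# an unpooled one-sided two-proportion z-test on the ARC-easy counts from
# Table 1, plus the range of parameter values the test does not reject.
from math import sqrt
from scipy.stats import norm

n = 2376                          # ARC-easy instances
p1, p2 = 1721 / n, 1637 / n       # empirical accuracies of S1 and S2

gap = p1 - p2                                        # observed gap (~0.035)
se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)     # unpooled standard error

z = gap / se
p_value = 1 - norm.cdf(z)         # P(gap at least this extreme | H0: theta1 = theta2)
print(f"gap={gap:.4f}  z={z:.2f}  one-sided p-value={p_value:.4f}")   # ~0.0037

# Values of theta1 - theta2 not rejected by the one-sided test at the 5% level,
# close to the paper's reported CI of [0.0136, 0.057].
z_crit = norm.ppf(0.95)
print(f"CI: [{gap - z_crit * se:.4f}, {gap + z_crit * se:.4f}]")
```

Running the sketch recovers the p-value of Example 1 and an interval matching Example 2 up to rounding, which is useful as a sanity check before moving to the Bayesian analyses below.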
Recall that in a typical NLP experiment, these parameters can be, e.g., the actual mean or standard deviation for the performance of a system, as its inherent and unobserved property. In Bayesian inference frameworks, a priori assumptions and beliefs are encoded in the form of a prior distribution P(θ) on parameters of the model.6 In other words, a prior distribution describes the common belief about the parameters of the model. It also implies a distribution over possible observations. For assessing hypotheses H3 and H4 in our running example, we will simply use the uniform prior, i.e., the inherent accuracy is uniformly distributed over [0, 1]. This corresponds to having no prior belief about how high or low the inherent accuracy of a typical QA system may be. In general, the choice of this prior can be viewed as a compromise between the beliefs of the analyzer and those of the audience. The above uniform prior, which is equivalent to the Beta(1,1) distribution, is completely non-committal and thus best suited for a broad audience who has no reason to believe an inherent accuracy of 0.8 is more likely than 0.3. For a moderately informed audience that already believes the inherent accuracy is likely to be widely distributed but centered around 0.67, the analyzer may use a Beta(3,1.5) prior to evaluate a hypothesis. Similarly, for an audience that already believes the inherent accuracy to be highly peaked around 0.75, the analyzer may want to use a Beta(9,3) prior. Formally, one incorporates θ in a hierarchical model in the form of a likelihood function P(y|θ). This explicitly models the underlying process that connects the latent parameters to the observations. Consequently, a posterior distribution is inferred using the Bayes rule and conditioned on the observations: P(θ|y) = P(y|θ)P(θ) P(y) . The posterior distribution is a combined summary of the data and prior information, about likely values of θ. The mode of the posterior (maximum a posteriori) can be seen as an estimate for θ. Additionally, the posterior can be used to describe the uncertainty around the mode. While the posterior distribution can be analytically calculated for simple models, it is not so straightforward for general models. Fortunately, 6We use P(x) in its most general form, to denote the Probability Mass Function for discrete variables and the Probability Density Function for continuous variables. recent advances in hardware, Markov Chain Monte Carlo (MCMC) techniques (Metropolis et al., 1953; Gamerman and Lopes, 2006), and probabilistic programming7 allow sufficiently-accurate numerical approximations of posteriors. One way to summarize the uncertainty around the point estimate of parameters is by marking the span of values that cover α% of the mostcredible density in the posterior distribution (e.g., α = 95%). This is called Highest Density Intervals (HDIs) or Bayesian Confidence Intervals (Oliphant, 2006) (not to be confused with CI, in §2.2). Recall that a hypothesis H is a condition on θ (see Figure 1). Therefore, given the posterior P(θ|y), one can calculate the probability of H, as a probabilistic event, conditioned on y: P(H|y). For example in an unpaired t-test, H0 is the event that the means of two groups are equal. Bayesian statisticians usually relax this strict equality θ1 = θ2 and instead evaluate the credibility of |θ1 −θ2| < ε for some small value of ε. The intuition is that when θ1 and θ2 are close enough they are practically equivalent. 
This motivates the definition of Region Of Practical Equivalence (ROPE): An interval around zero with “negligible” radius. The boundaries of ROPE depend on the application, the meaning of the parameters and its audience. In our running example, a radius of one percent for ROPE implies that improvements less than 1 percent are not considered notable. For a discussion on setting ROPE see Kruschke (2018). These concepts give researchers the flexibility to define and assess a wide range of hypotheses. For instance, we can address H3 (from Introduction) and its different variations that can be of interest depending on the application. The analysis of H3 is depicted in Figure 2 and explained next.8 Example 3 (Assessment of H3) Recall the setting from previous examples. The left panel of Figure 2 shows the prior on the latent accuracy of the systems and their differences (further details on the hierarchical model in §A.3.) We then obtain the posterior distribution (Figure 2, right), in this case via numerical methods). Notice that one can read the following conclusion: with probability 0.996, the hypothe7Pymc3 (in Python) and JAGS & STAN (in R) are among the commonly-used packages for this purpose. 8Figure 2 can be readily reproduced via the accompanying software, HyBayes. 5720 θ1 θ1 - θ2 θ2 θ2 θ1 - θ2 ROPE HDI HDI HDI θ1 0.0612 0.00939 Figure 2: Left: Prior distributions of two systems (bottom row) and their difference (top row). Right: Posterior distributions of two systems (bottom row) and their difference (top row) after observing the performances on ARCeasy dataset. Note the posterior HDI estimate, (0.00939, 0.0612). Here we assume at least one percent accuracy difference to be considered practically different. Hence, we indicate the interval (−0.01, 0.01) as ROPE (§2.3.) sis H3 (with a margin of 0%) holds true. As explained in §C.2, this statement does not imply any difference with a notable margin. In fact, the posterior in Figure 2 implies that this experiment is not sufficient to claim the following: with probability at least 0.95, hypothesis H3 (with a margin of 1%) holds true. This is the case since ROPE (0.01, 0.01) overlaps with 95% HDI (0.00939, 0.0612). 2.4 Bayes Factor A common tool among Bayesian frameworks is the notion of Bayes Factor.9 Intuitively, it compares how the observations y shift the credibility from prior to posterior of the two competing hypothesis: BF 01 = P(H0|y) P(H1|y) , P(H0) P(H1) If the BF 01 equals to 1 then the data provide equal support for the two hypotheses and there is no reason to change our a priori opinion about the relative likelihood of the two hypotheses. A smaller Bayes Factor is an indication of rejecting the nullhypothesis H0. If it is greater than 1 then there is support for the null-hypothesis and we should infer that the odds are in favor of H0. Notice that the symmetric nature of Bayes Factor allows all the three outcomes of “accept”, “reject”, and “undecided,” as opposed to the definition of p-value that cannot accept a hypothesis. 9“Bayesian Hypothesis Testing” usually refers to the arguments based on “Bayes Factor.” However, as shown in §2.3, there are other Bayesian approaches for assessing hypotheses. Example 4 (Assessment of H4) Here we want to assess the null-hypothesis H0: |θ1 −θ2| < 0.01 against H1: |θ1 −θ2| ≥0.01 (x = 0.01). Substituting posterior and prior values, one obtains: BF 01 = 0.027 0.980 , 0.019 0.972 = 1.382 . 
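For readers who want to reproduce the flavor of Example 3 without setting up MCMC, the sketch below uses a simple conjugate Beta-Binomial model with the uniform Beta(1,1) prior discussed above. This simplification is an assumption made here for illustration; HyBayes itself fits a hierarchical model numerically, so treat the sketch as an approximation rather than the package's method.

```python
# Hedged sketch of Example 3 under a conjugate Beta-Binomial model with
# uniform Beta(1,1) priors (an illustrative assumption; the paper's HyBayes
# package approximates the posterior of a hierarchical model via MCMC).
import numpy as np

rng = np.random.default_rng(0)
n, k1, k2 = 2376, 1721, 1637                  # ARC-easy counts from Table 1

# Posterior draws of the inherent accuracies theta1 and theta2
theta1 = rng.beta(1 + k1, 1 + n - k1, size=1_000_000)
theta2 = rng.beta(1 + k2, 1 + n - k2, size=1_000_000)
delta = theta1 - theta2

# Central 95% posterior interval of the difference; the posterior here is
# nearly symmetric, so this closely approximates the 95% HDI (0.00939, 0.0612).
lo, hi = np.percentile(delta, [2.5, 97.5])
print(f"95% interval for theta1 - theta2: ({lo:.4f}, {hi:.4f})")

# Posterior probability that S1 exceeds S2 (H3 with a 0% margin, ~0.996) and
# by at least a 1% margin (~0.97, matching P(H1|y) in Example 4). The paper's
# decision for the 1% margin instead relies on the HDI-ROPE overlap rule.
print("P(theta1 - theta2 > 0    | y) =", (delta > 0.0).mean())
print("P(theta1 - theta2 > 0.01 | y) =", (delta > 0.01).mean())
```

Because the resulting interval dips inside the ROPE (-0.01, 0.01), the HDI-ROPE rule leaves the 1%-margin claim undecided, matching the conclusion drawn from Figure 2 even though the margin-0% claim is strongly supported.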
This value is very close to 1 which means that this observation does not change our prior belief about the two systems difference. 3 Comparisons Many aspects influence the choice of an approach to assess significance of hypotheses. This section provides a comparative summary, with details in Appendix C and an overall summary in Table 3. 3.1 Susceptibility to Misinterpretation The complexity of interpreting significance tests combined with insufficient reporting could result in ambiguous or misleading conclusions. This ambiguity can not only confuse authors but also cause confusion among readers of the papers. While p-values (§2.1) are the most common approach, they are inherently complex, which makes them easier to misinterpret (see examples in §C.1). Interpretation of confidence intervals (§2.2) can also be challenging since it is an extension of pvalue (Hoekstra et al., 2014). Approaches that provide measures of uncertainty directly in the hypothesis space (like the ones in §2.3) are often more 5721 Method Paradigm Ease of interpretation (1 =easy) (§3.1) Encourages binary-thinking (3.2) Depends on stopping intention (3.3) Dependence on prior (3.4) Decision rule # of papers using this test in ACL’18 (§2.1) p-value frequentist 3 Yes Yes No Acceptable p-value 73 (§2.2) CI frequentist 4 No Yes No Acceptable confidence margin 6 (§2.3) HDI Bayesian 1 No No Not sensitive but takes it into account HDI relative to ROPE 0 (§2.4) BF Bayesian 2 Yes No Highly sensitive Acceptable BF 0 Table 3: A comparison of different statistical methods for evaluating the credibility of a hypothesis given a set of observations. The total number of published papers in at the ACL-2018 conference is 439. natural choices for reporting the results of experiments (Kruschke and Liddell, 2018). 3.2 Measures of Certainty A key difference is that not all methods studied here provide a measure of uncertainty over the hypothesis space. For instance, p-values (§2.1) do not provide probability estimates on two systems being different (or equal) (Goodman, 2008). On the contrary, they encourage binary thinking (Gelman, 2013), that is, confidently concluding that one system is better than another, without taking into account the extent of the difference between the systems. CIs (§2.2) provide a range of values for the target parameter. However, this range also does not have any probabilistic interpretation in the hypothesis space (du Prel et al., 2009). On the other hand, posterior intervals (§2.3) generally provide a useful summary as they capture probabilistic estimates of the correctness of the hypothesis. 3.3 Dependence on Stopping Intention The process by which samples in the test are collected can affect the outcome of a test. For instance, the sample size n (whether it is determined before the process of gathering information begins, or is a random variable itself) can change the result. Once observations are recorded, this distinction is usually ignored. Hence, the testing algorithms that do not depend on the distribution of n are more desirable. Unfortunately, the definition of p-value (§2.1) depends on the distribution of n. For instance, Kruschke (2010, §11.1) provides examples where this subtlety can change the outcome of a test, even when the final set of observations is identical. 3.4 Sensitivity to the Choice of Prior The choice of the prior can change the outcome of Bayesian approaches (§2.3 & §2.4). 
Decisions of Bayes Factor (§2.4) are known to be sensitive to the choice of prior, while posterior estimates (§2.3) are less so. For further discussion, see C.4 or refer to discussions by Sinharay and Stern (2002); Liu and Aitkin (2008) or Dienes (2008). 4 Current Trends and Malpractices This section highlights common practices relevant to the our target approaches. To better understand the common practices or misinterpretations in the field, we conducted a survey. We shared the survey among ∼450 NLP researchers (randomly selected from ACL’18 Proceedings) from which 55 individuals filled out the survey. While similar surveys have been performed in other fields (Windish et al., 2007), this is the first in the NLP community, to the best of our knowledge. Here we review the main highlights (see Appendix for more details and charts). Interpreting p-values. While the majority of the participants have a self-claimed ability to interpret p-values (Figure 9f), many choose its imprecise interpretation “The probability of the observation this extreme happening due to pure chance” (the popular choice) vs. a more precise statement “Conditioned on the null hypothesis, the probability of the observation this extreme happening.” (see Q1 & Q2 in Appendix B.) The use of CIs. Even though 95% percent of the participants self-claimed the knowledge of CIs (Figure 9e), it is rarely used in practice. In an annotation done on ACL’18 papers by two of the authors, only 6 (out of 439) papers were found to use CIs. The use of Bayes Factors. A majority of the participants had “heard” about “Bayesian Hypothesis Testing” but did not know the definition of “Bayes Factor” (Figure 3). HDIs (discussed in §2.3) were the least known. We did not find any papers in ACL’18 that use Bayesian tools. 5722 Have you heard about "Bayesian Hypothesis Testing"? I have used "hypothesis testing" in the past (in a homework, a paper, etc). Do you know the definition of "Bayes Factor"? Do you know the definition of "Highest Density Interval"? Figure 3: Select results from our survey. The use of “significan*”. A notable portion of NLP papers express their findings by using the term “significant” (e.g., “our approach significantly improves over X.”) Almost all ACL’18 papers use the term “significant”10 somewhere. Unfortunately, there is no single universal interpretation of such phrases across readers. In our survey, we observe that when participants read “X significantly improves Y” in the abstract of a hypothetical paper: 1. About 82% expect the claim to be backed by “hypothesis testing”; however, only 57% expect notable empirical improvement (see Q3 in Appendix B); 2. About 35% expect the paper to test “practical significance”, which is not generally assessed by popular tests (see §C.2); 3. A few also expect a theoretical argument. Recent trends. Table 3 provides a summary of the techniques studied here. We make two key observations: (i) many papers don’t use any hypothesis assessment method and would benefit from one; (ii) from the final column, p-value based techniques clearly dominate the field, a clear disregard to the advantages that the bottom two alternatives offer. 10Or other variants “significantly”, “significance”, etc. 5 Recommended Practices Having discussed common issues, we provide a collection of recommendations (in addition to the prior recommendations, such as by Dror et al. (2018)). The first step is to define your goal. Each of the tools in §2 provides a distinct set of information. 
Therefore, one needs to formalize a hypothesis and consequently the question you intend to answer by assessing this hypothesis. Here are four representative questions, one for each method: 1. Assuming that the null-hypothesis is true, is it likely to witness observations this extreme? (§2.1) 2. How much my null-hypothesis can deviate from the mean of the observations until a p-value argument rejects it. (§2.2) 3. Having observed the observations, how probable is my claimed hypothesis?(§2.3) 4. By observing the data how much do the odds increase in favor of the hypothesis?(§2.4) If you decide to use frequentist tests: • Check if your setting is compatible with the assumptions of the test. In particular, investigate if the meaning of null-hypothesis and sampling distribution match the experimental setting. • Include a summary of the above investigation. Justify unresolved assumption mismatches. • Statements reporting p-value and confidence interval must be precise enough so that the results are not misinterpreted (see §3.1). • The term “significant” should be used with caution and clear purpose to avoid misinterpretations (see §4). One way to achieve this is by using adjectives “statistical” or “practical” before any (possibly inflected) usage of “significance.” • Often times, the desired conclusion is a notable margin in the superiority of one system over another (see §3). In such cases, a pointwise pvalue argument is not sufficient; a confidence interval analysis is needed. If CI is inapplicable for some reason, this should be mentioned. If you decide to use Bayesian approaches: • Since Bayesian tests are less known, it is better to provide a short motivation for the usage. • Familiarize yourself with packages that help you decide a hierarchical model, e.g., the software provided here. If necessary, customize these models for your specific problem. • Be clear about your hierarchical model, including model parameters and priors. In most cases, these choices should be justified (see §2.3.) 5723 Statistical Model Observation Type Hierarchical Model Assumptions Parameters Common settings / metrics Common Frequentist test (Parametric) Common Frequentist test (Non-Parametric) Binary model binary output Bernoulli distribution with Beta prior 2 For each group: p ∈ [0,1] (success probablity) correct vs incorrect predictions Binomial test bootstrap / permutation Binomial model binomial output Binomial distribution with Beta prior 2,3,6 For each group: p ∈ [0,1] (success probablity) Exact match, Accuracy, Recall, UAS (sentencelevel), LAS (sentencelevel)* Binomial test bootstrap / permutation Metric model metric observations T-Student distribution with muliple priors * 1,2,4 For each group: mu ∈ R and sigma ∈ R+ Shared between groups: nu ∈ R+ (normality parameter) Exact match, Accuracy, Recall, UAS (sentencelevel), LAS (sentencelevel), running time. energy usage, L2 error t-test bootstrap / permutation Count model counts Negative Binomial distribution with Normal prior 2,5 For each group: mu ∈ R+ (rate parameter) alpha ∈ R+ (shape parameter) The count of certain patterns an algorithm could find in a big pool, in a fixed amount of time. Notice that you can’t convert this into a ratio form, since there is no welldefined denominator. 
Ex: measuring how many of questions could be answered correctly (from an infinite pool of questions) by a particular QA systems, in a limited minute (the system is allowed to skip the questions too) bootstrap / permutation Ordinal model ordinals Normal distribution with parameterized tresholds 2 For each group: mu ∈ R and sigma ∈ R+ Shared between groups: thresholds between possibe levels Collection of objects/labels arranged in a certain ordering, not necessarily with a metric distance between them; for example sentiment labels (https://www. aclweb.org/anthology/S16-1001.pdf), product review categories, grammaticality of sentences bootstrap / permutation Assumption 1: The observations are distributed as a t-student with unknown normality parameter (a normal distribution with potentially longer tales). Assumption 2: The observations from each group are assumed to be i.i.d, conditioned on the inherent characterstics of two systems Assumption 3: The total number of instances (the denominators) is known. Assumption 4: The variable is inherently continuous, or the granularity (the denominator) is high enough to treat the variable as continuous. Assumption 5: The observations follow a Negative-Binomail / Poisson distribution. Assumption 6: The observations follow a binomial-distribution. Assumption 7: The observations follow a normal distribution. * In this model (unlike frequentist t-test) outliers don't need to be discarded manually to realize the strict normality assumption. Table 4: Select models supported by our package HyBayes at the time of this publication. • Comment on the certainty (or the lack of) of your inference in terms of HDI and ROPE: (I) is HDI completely inside ROPE, (II) they are completely disjoint, (III) HDI contains values both inside and outside ROPE (see §2.3.) • For reproducibility, include further details about your test: MCMC traces, convergence plots, etc. (Our HyBayes package provides all of this.) • Be wary that Bayes Factor is highly sensitive to the choice of prior (see §3.4). See Appendix §C.4 for possible ways to mitigate this. 5.1 Package HyBayes We provide an accompanying package, HyBayes, to facilitate comparing systems using the two Bayesian hypothesis assessment approaches discussed earlier: (a) posterior probabilities and (b) Bayes Factors. (Several packages are already available for frequentist assessments.) Table 4 summarizes common settings in which HyBayes can be employed11 in NLP research, including typical use cases, underlying data assumptions, recommended hierarchical model, metrics (accuracy, exact match, etc.), and frequentist tests generally used in these cases. These settings cover several typical assumptions on observed NLP data. However, if a user has specific information on observations or can capitalize on other assumptions, we recommend adding a custom model, which can be done relatively easily. 11These settings are available at the time of this publication, with more options likely to be added in the future. 6 Conclusion Using well-founded mechanisms for assessing the validity of hypotheses is crucial for any field that relies on empirical work. Our survey indicates that the NLP community is not fully utilizing scientific methods geared towards such assessment, with only a relatively small number of papers using such methods, and most of them relying on p-value. Our goal was to review different alternatives, especially a few often ignored in NLP. 
We surfaced various issues and potential dangers of careless use and interpretations of different approaches. We do not recommend a particular approach. Every technique has its own weaknesses. Hence, a researcher should pick the right approach according to their needs and intentions, with a proper understanding of the techniques. Incorrect use of any technique can result in misleading conclusions. We contribute a new toolkit, HyBayes, to make it easy for NLP practitioners to use Bayesian assessment in their efforts. We hope that this work provides a complementary picture of hypothesis assessment techniques for the field and encourages more rigorous reporting trends. Acknowledgments The authors would like to thank Rotem Dror, Jordan Kodner, and John Kruschke for invaluable feedback on an early version of this draft. This work was partly supported by a gift from the Allen Institute for AI and by DARPA contracts FA8750-19-2-1004 and FA875019-2-0201. 5724 References Valentin Amrhein, Fr¨anzi Korner-Nievergelt, and Tobias Roth. 2017. The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research. PeerJ, 5:e3544. Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Proceedings of EMNLP, pages 995–1005. James O Berger and Thomas Sellke. 1987. Testing a point null hypothesis: The irreconcilability of p values and evidence. Journal of the American Statistical Association, 82(397):112–122. Nancy Chinchor. 1992. The statistical significance of the MUC-4 results. In Proceedings of the 4th conference on Message understanding, pages 30–50. Association for Computational Linguistics. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. CoRR, abs/1803.05457. Janez Demˇsar. 2008. On the appropriateness of statistical tests in machine learning. In Workshop on Evaluation Methods for Machine Learning in conjunction with ICML, page 65. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL, pages 4171– 4186. Zoltan Dienes. 2008. Understanding psychology as a science: An introduction to scientific and statistical inference. Macmillan International Higher Education. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhikers guide to testing statistical significance in natural language processing. In Proceedings of ACL, pages 1383–1392. Rotem Dror and Roi Reichart. 2018. Recommended statistical significance tests for NLP tasks. arXiv preprint arXiv:1809.01448. Stefan Evert. 2004. Significance tests for the evaluation of ranking methods. In Proceedings of COLING. Dani Gamerman and Hedibert F Lopes. 2006. Markov chain Monte Carlo: stochastic simulation for Bayesian inference. CRC Press. Andrew Gelman. 2013. The problem with p-values is how they’re used. Steven Goodman. 2008. A dirty dozen: Twelve pvalue misconceptions. Seminars in Hematology, 45(3):135–140. Yvette Graham, Nitika Mathur, and Timothy Baldwin. 2014. Randomized significance tests in machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 266–274. Rink Hoekstra, Richard D Morey, Jeffrey N Rouder, and Eric-Jan Wagenmakers. 2014. Robust misinterpretation of confidence intervals. Psychonomic bulletin & review, 21(5):1157–1164. 
Jeehyoung Kim and Heejung Bang. 2016. Three common misuses of p values. Dental hypotheses, 7(3):73. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP. John K Kruschke. 2010. Bayesian data analysis. Wiley Interdisciplinary Reviews: Cognitive Science, 1(5):658–676. John K Kruschke. 2018. Rejecting or accepting parameter values in bayesian estimation. Advances in Methods and Practices in Psychological Science, 1(2):270–280. John K Kruschke and Torrin M Liddell. 2018. The bayesian new statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a bayesian perspective. Psychonomic Bulletin & Review, 25(1):178–206. Charles C Liu and Murray Aitkin. 2008. Bayes factors: Prior sensitivity and model generalizability. Journal of Mathematical Psychology, 52(6):362–375. N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, and E. Teller. 1953. Equation of state calculations by fast computing machines. The journal of chemical physics, 21:1087. Markus Ojala and Gemma C Garriga. 2010. Permutation tests for studying classifier performance. JMLR, 11(Jun):1833–1863. Travis E Oliphant. 2006. A Bayesian perspective on estimating mean, variance, and standard-deviation from data. Technical report, Brigham Young University. https://scholarsarchive.byu.edu/facpub/278/. Jean-Baptist du Prel, Gerhard Hommel, Bernd R¨ohrig, and Maria Blettner. 2009. Confidence interval or p-value?: Part 4 of a series on evaluation of scientific publications. Deutsches ¨Arzteblatt International, 106(19):335. Stefan Riezler and John T Maxwell. 2005. On some pitfalls in automatic evaluation and significance testing for mt. In Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 57– 64. Sandip Sinharay and Hal S Stern. 2002. On the sensitivity of Bayes factors to the prior distributions. The American Statistician, 56(3):196–201. 5725 Anders Søgaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and H´ector Mart´ınez Alonso. 2014. What’s in a p-value in NLP? In Proceedings of CoNLL, pages 1–10. Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2018. Improving machine reading comprehension with general reading strategies. In Proceedings of NAACL. David Trafimow and Michael Marks. 2015. Editorial. Basic and Applied Social Psychology, 37(1):1–2. Ronald L Wasserstein, Nicole A Lazar, et al. 2016. The ASAs statement on p-values: context, process, and purpose. The American Statistician, 70(2):129–133. Donna M Windish, Stephen J Huot, and Michael L Green. 2007. Medicine residents’ understanding of the biostatistics and results in the medical literature. Jama, 298(9):1010–1022.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5726–5735 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5726 STARC: Structured Annotations for Reading Comprehension Yevgeni Berzak MIT BCS [email protected] Jonathan Malmaud MIT BCS [email protected] Roger Levy MIT BCS [email protected] Abstract We present STARC (Structured Annotations for Reading Comprehension), a new annotation framework for assessing reading comprehension with multiple choice questions. Our framework introduces a principled structure for the answer choices and ties them to textual span annotations. The framework is implemented in OneStopQA, a new high-quality dataset for evaluation and analysis of reading comprehension in English. We use this dataset to demonstrate that STARC can be leveraged for a key new application for the development of SAT-like reading comprehension materials: automatic annotation quality probing via span ablation experiments. We further show that it enables in-depth analyses and comparisons between machine and human reading comprehension behavior, including error distributions and guessing ability. Our experiments also reveal that the standard multiple choice dataset in NLP, RACE (Lai et al., 2017), is limited in its ability to measure reading comprehension. 47% of its questions can be guessed by machines without accessing the passage, and 18% are unanimously judged by humans as not having a unique correct answer. OneStopQA provides an alternative test set for reading comprehension which alleviates these shortcomings and has a substantially higher human ceiling performance.1 1 Introduction Assessment of reading comprehension is of paramount importance in education and science and is a key component of high-stakes evaluations such as the SAT examinations. Reading comprehension tasks are also central to NLP, where extensive efforts are invested in developing systems that try to match human-level performance. Despite 1OneStopQA dataset, STARC guidelines and human experiments data are available at https://github.com/ berzak/onestop-qa the proliferation of NLP work on reading comprehension and the increasing number of large-scale reading comprehension datasets, key quality assurance issues such as question guessability, unwanted dataset biases, and the considerable success of simple pattern matching and slot filling heuristics remain open challenges for ensuring that evaluation benchmarks capture genuine reading comprehension. Further, existing annotation frameworks have very limited support for reading behavior analyses which go beyond simple accuracy statistics. In this work, we introduce STARC, a new annotation framework for multiple choice reading comprehension, which addresses these shortcomings. Our framework aims to ensure high annotation quality and supports detailed probing and comparisons of human and machine reading comprehension behavior. The following are the primary novel characteristics of our annotation scheme. Structured Answer Choices As opposed to existing multiple choice reading comprehension datasets, our framework has a principled and consistent answer structure. Specifically, every question has four possible answers. The first answer is the correct answer. Importantly, the correct answer typically does not appear verbatim in the passage. The second answer represents a misunderstanding of the critical information for answering the question correctly. 
The third answer refers to information in the passage that is not relevant for the question. The fourth distractor has no support in the passage. This structure reflects four fundamental types of responses, ordered by miscomprehension severity. Auxiliary Span Annotations To further enhance the versatility of the annotation scheme, the framework provides span annotations for the different answer choices. This approach creates a systematic correspondence between answers and their textual support. Specifically, the correct answer relies on a critical span which contains the 5727 essential information for answering the question. In contrast to span identification datasets such as SQUAD (Rajpurkar et al., 2016) and Natural Questions (Kwiatkowski et al., 2019), we do not consider the span as the correct answer, but rather as a text region that contains the critical information required for answering the question correctly. The second answer represents a misunderstanding of that same span. Finally, the information referred to in the third answer is marked in a distractor span. In this paper we demonstrate that the combination of a consistent answer structure with span annotations opens the door for new approaches to automatic verification of annotations and enables new types of analyses for reading comprehension. We further introduce OneStopQA, a new dataset for multiple choice reading comprehension which implements our annotation framework. OneStopQA is a carefully constructed high-quality dataset intended primarily for testing and analyses, thereby complementing the existing larger multiple choice dataset RACE (Lai et al., 2017), which also has a 4-answer format and is commonly used for training. OneStopQA is designed to be challenging for both machine and human readers. The dataset comprises 30 articles from the Guardian in three parallel text difficulty versions and contains 1,458 paragraph-question pairs with multiple choice questions, along with manual span markings for both correct and incorrect answers. Despite its shorter passages and more constrained annotation scheme, baselines perform worse on OneStopQA than on RACE and the performance of a state-of-the-art model is comparable on both datasets. We use OneStopQA to introduce an ablationbased framework for automatic verification of multiple choice reading comprehension materials and to measure the extent to which the dataset can be solved without performing reading comprehension. Our framework is inspired by prior work on tasks such as image captioning and Visual Question Answering (VQA), where models were shown to perform well despite limited reliance on the images or the questions (Jabri et al., 2016; Agrawal et al., 2016; Goyal et al., 2017; Chao et al., 2018). We utilize this framework to demonstrate the validity of OneStopQA annotations and their robustness to heuristics. Our analyses further reveal quality control issues in RACE. Machine readers are able to guess the correct answers to 47.1% of the questions in RACE without being exposed to the passage, as opposed to 37.2% for OneStopQA. When presented to humans via crowdsourcing, 18.3% of the questions in RACE are unanimously judged by three annotators as not having a single correct answer, compared to only 3.4% for OneStopQA. Using this human data, we establish an approximate ceiling above which model performance improvements are not likely to be meaningful: 88.8% on RACE and 97.9% on OneStopQA. 
We further verify this ceiling approximation with an in-lab human reading comprehension experiment in which we obtain a superior empirical human ceiling of 95.3% for OneStopQA as compared to 84.7% for RACE. These results are consequential in that state-of-the-art models are already around ceiling performance on RACE, while substantial room for improvement is still available for OneStopQA. Finally, we showcase how the structure of OneStopQA annotations can be used for detailed comparisons between human and machine readers. Specifically, we demonstrate that human subjects and a state-of-the-art machine reading comprehension model have similar distributions of erroneous answers, suggesting a deeper link between human and machine readers than previously reported. On the other hand, humans and machines are fundamentally different in their guessing behavior. To summarize, the primary contributions of this work are the following: • We present STARC, an annotation framework for reading comprehension which combines structured answers with span annotations for both correct answers and distractors. • We annotate and release OneStopQA, a dataset which adheres to this framework. • We introduce a new methodology which leverages our annotations for automated data quality probing via ablation experiments. • We showcase the value of the annotation framework for detailed analyses of human and machine reading comprehension behavior. • Our experiments reveal that RACE is highly guessable and has a relatively low human ceiling due to low item quality in a large portion of the questions. OneStopQA does not have these drawbacks and can serve as an alternative out-of-domain challenge dataset for evaluations, compatible with training on RACE. 5728 The combination of the novel annotation framework and the presented experiments suggests that the proposed annotation framework and our dataset can improve both the depth and the breadth of reading comprehension evaluations. 2 STARC Annotation Scheme STARC is a new annotation framework accompanied by a protocol for increasing annotation quality and reducing annotation biases which can be exploited by either humans or machines for solving reading comprehension datasets without performing the intended task. The annotation scheme aims for the questions to be on a high difficulty level. Importantly, STARC tries to minimize the possibility of answering questions correctly using simple string-matching strategies, as well as guessing the correct answer without reading the passage. To focus on testing language comprehension, as opposed to other types of skills and knowledge, it aims to avoid questions that rely on numerical reasoning and substantial external world knowledge. It also refrains from questions that require the reader to speculate (for example, given some information on person X, ask about their likely position issue Y when this position is not stated in the text). Reading comprehension questions have four answers, structured in the following manner. A is the correct answer. Answering a question correctly requires comprehending information from a text span in the passage called the critical span. Importantly, with exceptions when necessary, the correct answer should not appear in the critical span in verbatim form. B is an incorrect answer which represents a plausible misunderstanding of the critical span. C is an incorrect answer which refers to an additional span in the passage, called the distractor span. 
This answer can be anchored in the distractor span in various ways. For example, it may borrow keywords, or contain a correct fact that is stated in the distractor span but is not the correct answer to the question. D is an incorrect answer which is plausible a-priori, but has no support in the passage. Note that to be plausible, D often appeals to the reader’s general world knowledge. Neither the critical span nor the distractor span have to adhere to sentence boundaries, and both can be non-continuous. This structure introduces well-defined and consistent relations between the answers and the passage. Further, the answers are ordered by degree of comprehension, whereby A represents correct comprehension, B reflects the ability to identify the crucial information for answering the question but failure to comprehend it, C reflects some degree of attention to the passage’s content, and D provides no evidence for text comprehension. The utilization of B-type answers in particular enables probing comprehension at a deep level. The overall answer structure can support new types of error analyses beyond the correct/incorrect distinction by examining specific types of miscomprehension and their relation to the text. In order to reduce the effectiveness of answer elimination strategies, we developed additional guidelines on the joint form and content of the answers. These include a quality ranking of answer patterns, where the most preferred structures are those in which all answers have either similar phrasings or distinct phrasings. For all other patterns (e.g. three similarly worded answers and an outstanding answer), the answer types for the pattern should be distributed equally across questions. The guidelines also list dispreferred content relations between answers, such as B being the opposite of A. Finally, the guidelines specify that the answers across, and whenever possible within questions should be of comparable length. 3 OneStopQA Dataset We implemented the STARC annotation framework in a new reading comprehension dataset, OneStopQA. The textual materials of OneStopQA are drawn from the OneStopEnglish corpus (Vajjala and Luˇci´c, 2018), which contains Guardian News Lessons articles from the English language learning portal onestopenglish.com by Macmillan Education. We chose articles that have non-repetitive content, and collectively represent a diverse range of topics. The texts were cleaned from errors stemming from the conversion process from the original PDFs to plain text, and manually converted from British to American English spelling. Each article has three versions, corresponding to three text difficulty levels: Advanced, Intermediate and Elementary. The Advanced version is the original Guardian article. The Intermediate and Elementary articles are simplified versions of the original article created by professional editors at onestopenglish.com. Common simplifications 5729 RACE OneStopQA Middle High Ele Int Adv Passages 6,409 / 368 / 362 18,728 / 1,021 / 1,045 162 162 162 Questions 25,421 / 1,436 / 1,436 62,445 / 3,451 / 3,498 486 486 486 Words per passage 232.12 354.08 112.32 126.97 138.6 Sentences per passage 16.6 17.99 5.42 5.4 5.36 Words per sentence 13.99 19.69 20.72 23.53 25.84 Flesh Kincaid 3.24 7.06 7.32 8.9 10.1 SMOG 7.58 10.14 10.29 11.4 12.21 Table 1: RACE and OneStopQA corpus statistics. The term “passage” refers to a single paragraph in OneStopQA and a single article in RACE. 
Values for the number of RACE passages and questions are formatted as Train / Dev / Test, while the remaining RACE values are calculated across the entire dataset. The readability measures Flesh Kincaid (Kincaid et al., 1975) and SMOG (Laughlin, 1969) are heuristic estimates of the number of education years required to fully comprehend the text. Advanced A major international disagreement with wide-ranging implications for global drugs policy has erupted over the right of Bolivia’s indigenous Indian tribes to chew coca leaves, the principal ingredient in cocaine. Bolivia has obtained a special exemption from the 1961 Single Convention on Narcotic Drugs, the framework that governs international drugs policy, allowing its indigenous people to chew the leaves. Bolivia had argued that the convention was in opposition to its new constitution, adopted in 2009, which obliges it to “protect native and ancestral coca as cultural patrimony” and maintains that coca “in its natural state ... is not a dangerous narcotic.” Elementary A big international disagreement has started over the right of Bolivia’s indigenous Indian tribes to chew coca leaves, the main ingredient in cocaine. This could have a significant effect on global drugs policy. Bolivia has received a special exemption from the 1961 Convention on Drugs, the agreement that controls international drugs policy. The exemption allows Bolivia’s indigenous people to chew the leaves. Bolivia said that the convention was against its new constitution, adopted in 2009, which says it must “protect native and ancestral coca” as part of its cultural heritage and says that coca “in its natural state ... is not a dangerous drug.” Q What was the purpose of the 1961 Convention on Drugs? A Regulating international policy on drugs B Discussing whether indigenous people in Bolivia should be allowed to chew coca leaves C Discussing the legal status of Bolivia’s constitution D Negotiating extradition agreements for drug traffickers Table 2: A question example with annotations for the Advanced and Elementary versions of the paragraph (note that the complete annotation contains two additional questions and the Intermediate paragraph level). The critical span is marked in bold red. The distractor span is marked in italic blue. include text removal, sentence splitting and text rewriting. In a few cases, the edits also include changes to the presentation order of the content. OneStopQA has 30 articles, with 4 to 7 paragraphs per article, and a total of 162 paragraphs. Each paragraph has 3 to 12 sentences. Further statistics on OneStopQA and RACE articles along with readability estimates for the different text difficulty levels are presented in Table 1. We note that OneStopQA paragraphs are considerably shorter than RACE articles. At the same time, even the Elementary version of OneStopQA has longer sentences and higher text difficulty level compared to the High School version of RACE. We composed three reading comprehension questions for each paragraph, resulting in 486 questions, and 1,458 question-paragraph pairs when considering all three text versions. All the questions are answerable based on any of the three difficulty levels of the paragraph. Furthermore, the questions are local to the paragraph; they are answerable without any additional information from the preceding nor the following paragraphs. All the spans were annotated manually for each question in all three versions of the paragraph. 
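To make the annotation structure concrete, a single annotated OneStopQA item can be thought of as the record sketched below, using the question from Table 2. The field names, file layout, and span offsets are illustrative assumptions and do not reflect the released data format:

# Illustrative Python sketch of one STARC-annotated item (hypothetical schema,
# not the released OneStopQA file format).
example_item = {
    "article_id": "bolivia_coca",        # hypothetical identifier
    "paragraph_level": "Adv",            # one of "Ele", "Int", "Adv"
    "paragraph": "A major international disagreement ... is not a dangerous narcotic.",
    "question": "What was the purpose of the 1961 Convention on Drugs?",
    "answers": {
        "A": "Regulating international policy on drugs",                # correct
        "B": "Discussing whether indigenous people in Bolivia should be "
             "allowed to chew coca leaves",                             # misreading of the critical span
        "C": "Discussing the legal status of Bolivia's constitution",   # anchored in the distractor span
        "D": "Negotiating extradition agreements for drug traffickers", # plausible a-priori, unsupported
    },
    # Character offsets into the paragraph; the values here are placeholders.
    "critical_span": (120, 260),
    "distractor_span": (300, 420),
}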
Two of the questions have the same or substantially overlapping critical spans, and the third question has a distinct critical span. No restrictions were imposed on the distractor spans. Statistics for the questions, answers and spans are presented in Table 3. Table 2 presents an annotated question for two paragraph difficulty levels. Appendix A contains details on the dataset development and piloting process. 4 Experiments We report a series of experiments which assess human and machine reading comprehension on OneStopQA and compare it to RACE. We further 5730 Definition Answer Span Span Length Length A correct 7.2 (3.5) critical 37.9 (16.5) B incorrect 7.6 (3.6) C incorrect 8.1 (3.8) distractor 15.5 (11.8) D incorrect 6.9 (3.1) N/A N/A Table 3: STARC answer structure, and mean length (in words) of answers and spans in OneStopQA (standard deviation in parentheses). 50% of the A spans comprise of more than one sentence. The mean OneStopQA question length is 11.2 words. In RACE, the mean question length is 10.0 and the mean answer length is 5.3. showcase the ability of our annotation framework to support automated dataset quality validation and enable in-depth comparisons between human and machine reading comprehension behavior. 4.1 Benchmarking Machine Reading Comprehension Performance In this experiment, we benchmark two neural reading comprehension models, the Stanford Attentive Reader (AR) (Chen et al., 2016), and RoBERTA (Liu et al., 2019) a state-of-the-art model on RACE. We train the models on RACE, and evaluate their accuracy on RACE and OneStopQA. To reduce the impact of potential domain differences, we also provide an evaluation in which we further finetune the models on OneStopQA with 5-fold cross validation, where in each fold 18 articles are used for training, 6 for development and 6 for testing. Additionally, we report the performance of the commonly used sliding window baseline (Richardson et al., 2013). In parallel with the two neural model evaluation regimes for OneStopQA, we perform two evaluations for this baseline, one in which the window size is optimized on the RACE development set, and one in which it is optimized on OneStopQA using 5-fold cross validation. Table 4 presents the results of this experiment. We observe that the two weaker models, Sliding Window and Stanford AR, perform better on RACE than on OneStopQA. Particularly notable is the large drop in the performance of Stanford AR from 42.8 on RACE to 34.3 on OneStopQA (p ≪.001, ttest). This suggests that OneStopQA is more robust to simple word-matching heuristics. The results for RoBERTa are comparable on OneStopQA and on RACE. We note that overall this is a strong outcome for OneStopQA in light of its span-based format, shorter paragraphs, and higher human ceiling performance which we discuss in Section 4.3. We further note that finetuning on OneStopQA preserves or improves performance across models by a small margin. Finally, the difficulty level of OneStopQA paragraphs has only a small and inconsistent effect on model performance. 4.2 Ablation-based Data Quality Probing We introduce a new methodology for analyzing the quality of reading comprehension datasets through ablation studies. This methodology enables evaluating the robustness of OneStopQA to guessing heuristics and the validity of the relation between the answers and the span annotations. In each ablation study, we train and evaluate the performance of RoBERTa without a part of the textual input. 
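As a rough illustration of what withholding part of the textual input can look like, the sketch below builds ablated versions of the illustrative example_item record given earlier; the function and its span-removal convention are assumptions for exposition, not the authors' implementation:

def ablate(item, drop_question=False, drop_passage=False,
           drop_span=None, keep_only_span=None):
    """Return (passage, question, answers) with the requested parts removed.
    drop_span / keep_only_span take None, "critical", or "distractor"."""
    passage = "" if drop_passage else item["paragraph"]
    if passage and keep_only_span is not None:
        start, end = item[f"{keep_only_span}_span"]
        passage = passage[start:end]                 # keep only that span
    elif passage and drop_span is not None:
        start, end = item[f"{drop_span}_span"]
        passage = passage[:start] + passage[end:]    # excise that span
    question = "" if drop_question else item["question"]
    return passage, question, item["answers"]

# Configurations mirroring the ablations reported below:
no_passage    = ablate(example_item, drop_passage=True)
no_question   = ablate(example_item, drop_question=True)
no_both       = ablate(example_item, drop_passage=True, drop_question=True)
only_critical = ablate(example_item, keep_only_span="critical")
no_distractor = ablate(example_item, drop_span="distractor")
no_critical   = ablate(example_item, drop_span="critical")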
The ablation studies are divided into two groups: • Full component ablations, applicable to any multiple choice reading comprehension dataset. In these experiments we withhold either the question, the passage or both during the training and testing of the model. • Span ablations, which are enabled by the STARC annotations and hence apply only to OneStopQA. In the span ablation experiments we remove parts of the passage according to the span markings. These experiments enable empirical validation of the relation between answers and spans. We report the results of these ablation studies in the RoBERTa portion of Table 5. Full component ablations When removing the passage, we obtain an accuracy of 37.2% on OneStopQA, and comparable choice rates among the distractors. This is a key result which suggests that RoBERTa is not able to recover substantial information about the correct answer without the passage and provides evidence for the a-priori plausibility of all three distractor types. In contrast to this outcome, on RACE, the passage ablation experiment yields a significantly higher accuracy of 47.1 (p ≪0.001, t-test). The ability of RoBERTa to guess the correct answers to nearly half of the questions in RACE without requiring the passage leads to a credit assignment issue, where 22% of RoBERTa’s performance on this dataset could in principle be attributed to question and answer patterns rather than reading comprehension. We next exclude the question and find that OneStopQA is less robust than RACE in this 5731 RACE OneStopQA (no finetuning) OneStopQA Mid High All Ele Int Adv All Ele Int Adv All Sliding Window 41.2 31.0 33.9 25.6 26.2 27.5 26.7 27.7 27.2 27.3 28.2 Stanford AR 40.0 43.9 42.8 30.2 30.1 30.1 30.2 34.2 34.3 34.3 34.3 RoBERTa Base 73.2 66.4 68.4 69.5 69.1 67.7 68.8 68.7 69.1 68.5 68.8 RoBERTa Large 86.6 81.3 82.9 85.6 85.0 86.0 85.6 86.0 85.4 86.4 86.0 Table 4: QA Accuracy on RACE and OneStopQA. Random baseline on both datasets is 25.0. In “OneStopQA (no finetuning)” the models are trained for QA only on RACE. In “OneStopQA” the models are trained on RACE and further finetuned on OneStopQA. RACE OneStopQA Mid High All Ele Int Adv All B C D RoBERTa Full Information 86.6 81.3 82.9 86.0 85.4 86.4 86.0 8.9 3.0 2.1 No passage 46.4 47.3 47.1 37.2 19.0 19.9 23.9 No Q 61.4 60.6 60.8 67.7 68.9 69.9 68.8 15.5 13.3 2.4 No Q & No passage 37.8 40.9 40.0 34.7 20.0 20.4 24.9 Only critical span 89.3 86.8 85.8 87.3 10.4 0.6 1.7 No distractor span 88.5 85.6 87.4 87.2 9.1 1.7 1.9 No critical span 42.0 40.1 41.1 41.1 20.6 14.9 23.5 Humans Prolific QA 85.8 70.3 74.8 81.7 79.7 80.7 10.3 6.8 2.2 Prolific No passage 42.8 37.8 39.3 31.9 21.1 19.5 27.5 Prolific % Consensus invalid Q 8.0 22.5 18.3 2.5 4.3 3.4 Approximate ceiling 94.7 86.4 88.8 98.5 97.2 97.9 In-lab QA 90.7 82.2 84.7 96.3 94.4 95.3 2.3 1.9 0.5 Table 5: Ablation experiments using RoBERTa Large and Human reading comprehension experiments. regime, with an accuracy of 68.8 compared to 60.8 (p < 0.001, t-test). This result is likely reflecting the fact that unlike in RACE, the correct answer in OneStopQA is always stated or can be directly inferred from the passage. We note that compared to the no-passage ablation, the presence of the passage eliminates D as expected. Interestingly, the relative choice rate for C is high for the no-question ablation compared to the full model, suggesting that RoBERTa is able to rule out C only in the presence of the question. 
This is a desirable behavior, consistent with the requirement for the C distractor to contain information from the passage which could be possibly correct, while not being a correct answer to the question. Finally, 40.0 percent of the RACE questions are guessable even when both the question and the passage are not provided, compared to 34.7 for OneStopQA (p ≪0.001, t-test). Span ablations In the OneStopQA span ablation experiments, providing RoBERTa only with the critical span makes it focus on A and B as the only viable options, as expected. A similar C elimination outcome is obtained when the ablation is targeted at the distractor span only. Finally, removing the critical span, which should make the question unanswerable, results in a sharp drop in performance to an accuracy of 41.1, only 3.9% above withholding the entire passage. Interestingly, the selection rate of C is lower compared to the full passage ablation, an outcome we intend to investigate further in the future. Overall, these results confirm the robustness of OneStopQA to guessing as well as the tight correspondence between answers and spans. We envision extending this framework in the future for automatic identification of specific items with problematic annotations which could substitute item pilots with human subjects. 4.3 Human Reading Comprehension In these experiments we assess human reading performance and guessing behavior, and further investigate OneStopQA and RACE question quality.2 • Question Answering (QA) This experiment benchmarks human question answering performance. Participants are presented with a passage along with a question and its four answers, and are asked to select the correct answer based on the passage. After confirming their selection, participants are informed on whether they answered correctly and shown the correct answer. 2The human subject data was collected under MIT IRB protocol #1605559077 - “Cognitive Foundations of Human Language Processing and Acquisition”. All subjects provided written consent prior to participation. 5732 • Guessing (No Passage) The goal of this experiment is to determine the extent to which humans can guess the correct answer to questions without reading the passage. Participants see only the question and its four answers and are asked to provide their best guess for the correct answer. After confirming their selection, participants are informed on whether it was correct and shown the correct answer along with the passage. • Question Validity Judging This experiment is designed to identify questions which do not have a unique correct answer. Participants are presented with the question, answers and the passage, and are asked to indicate whether the question has (A) one correct answer, (B) more than one correct answer, or (C) no correct answer. If (A) is selected, the participant further selects the correct answer. If (B) is selected, the participant is asked to mark all the answers that they consider to be correct. We deployed all three experiments on the crowdsourcing platform Prolific (prolific.co), with a 6 trials batch for each subject. The first two trials were fixed practice items, one with a passage from OneStopQA and one from RACE. These trials were tailored for each experiment such that performing the respective task correctly is straightforward. Next, each participant performed 4 experimental trials. 
Two of the trials had passages from OneStopQA (one Advanced and one Elementary, taken from different articles), and two were from RACE (one Middle School and one High School). To encourage participants to perform the tasks well, in the QA and Guessing experiments participants received a monetary bonus for each correct answer. In all three experiments, participants who did not answer both practice trials correctly were excluded from the analysis. The materials for each of the three Prolific experiments are 1296 question-passage pairs, 648 from OneStopQA and 648 from RACE. The OneStopQA items are taken from 20 OneStopQA articles, with a total of 108 paragraphs. For each paragraph we use two paragraph difficulty levels - Advanced and Elementary, combined with each of the 3 questions. The RACE materials include 108 Middle School articles and 108 High School articles from the RACE test set. We chose the articles at random among the articles that have three or more questions, and then randomly picked 3 questions for each article. In each of the three Prolific experiments we collected responses from three valid participants (i.e. participants who answered both practice trials correctly) for each question-passage pair. A single participant completed one batch in one of the three experiments, corresponding to a total of 2,916 unique participants (792 per experiment). Even in the presence of monetary incentives and participant filtering based on practice trials, it is hard to guarantee that crowd-sourcing workers are always performing the given task attentively. We therefore further ran the QA experiment with in-lab participants. For this experiment, we used a subset of 432 questions from the Prolific experiments’ materials. We recruited 12 participants (6 undergraduate students and 6 post-graduate students), each completing 36 items. The items given to each participant were equally distributed between datasets and text difficulty levels, and guaranteed not to repeat the same article for RACE and the same paragraph for OneStopQA. The results of the human reading comprehension experiments are presented in the “Humans” portion of Table 5. Comparisons were calculated using Satterthwaite’s method applied to a mixed-effects model that treats subjects and questions as crossed random effects. All the experiments suggest clear advantages of OneStopQA as compared to RACE. In the Prolific QA experiment, participants obtain a higher overall accuracy of 80.7 on OneStopQA compared to 74.3 on RACE (p < 0.001). We note that our QA experiment reproduces the Mechanical Turk experiment in (Lai et al., 2017), which yielded a similar human performance of 73.3 on RACE. In the Guessing experiment, we observe that without exposure to the passage, participants were able to obtain an accuracy of 32.1 on OneStopQA as compared to 39.5 on RACE (p ≪0.001). For the Question Validity Judging experiment we report the percentage of questions on which all three participants have indicated that the question does not have a unique answer. This metric reveals a dramatic advantage of OneStopQA, with 3.4% of invalid questions as compared to 18.3% for RACE (p ≪0.001). We note that this result is substantially different from the percentage of invalid questions reported in Lai et al. (2017), where the authors have estimated that only 5.5% of the RACE questions are invalid. The judging experiment also enables us to devise a heuristic for approximating the ceiling per5733 formance on both datasets. 
To calculate it, we assign valid questions with a score of 1, and invalid questions with a score of 1 divided by the average number of answers considered correct across participants (where no correct answer is treated as 4 correct answers). The resulting performance ceiling is 88.8 for RACE and 97.9 for OneStopQA. The QA accuracy of our in-lab participants approaches this ceiling with 95.3 accuracy on OneStopQA versus 84.7 on RACE (p < 0.01). The combination of this outcome with the results of our Question Validity experiment suggests that the human gap from perfect 100% accuracy on RACE is due mainly to poor item quality rather than high item difficulty. These results have important implications on current machine reading evaluations. With an accuracy of 82.9% for RoBERTa and even higher performance for ensemble models reported on the RACE public leader board, it is likely that current machine reading models are very close to exhausting the space of meaningful performance improvements on this dataset. On the other hand, a more substantial room for improvement is still available for OneStopQA. 4.4 Comparing Humans and Machines Our final analysis uses the structured annotations of OneStopQA for detailed comparisons of human and machine reading comprehension behavior. In particular, the annotations enable comparing the error distributions of humans and machines. Interestingly, we observe that the Prolific QA error distribution is similar to that of RoBERTa, where B is the most common error, C is the second most common error and D is the least common error. This error frequency order is in line with the strength order design of the distractors. Further, similarly to RoBERTa, humans are only slightly affected by the difficulty level of the paragraph, although differently from RoBERTa, human performance is consistently worse on the advanced level compared to the elementary level. These results suggest deeper parallels between human and machine reading comprehension behavior than previously observed via overall accuracy comparisons. Our no-passage guessing experiment on the other hand suggests interesting differences between humans and RoBERTa. First, RoBERTa, which is specifically trained on this task, has a higher guessing performance than humans on Prolific. Further, the overlap in the questions successfully guessed by humans and by RoBERTa is fairly small: the percentage of questions correctly guessed by both humans and RoBERTa is 18% for RACE and 12% for OneStopQA. We hypothesize that these results are due at least in part to RoBERTa picking up on statistical regularities in the question and answer training data which are difficult for humans to spot at test time. The STARC annotations enable gaining further insight into the difference in the guessing strategies of humans and machines: humans have a stronger preference for D (p < .05, McNemar’s test). This outcome makes sense in the absence of the paragraph, as while the other answers are constrained by the specifics of the paragraph, D distractors may appeal to general world knowledge and reasoning which can be beyond the capacities of RoBERTa. 5 Related Work A considerable number of reading comprehension datasets have been introduced in NLP. 
A large fraction of these datasets can be broadly divided into three tasks: Cloze (Hermann et al., 2015; Hill et al., 2015; Bajgar et al., 2016), span identification QA (Rajpurkar et al., 2016; Nguyen et al., 2016; Trischler et al., 2017; Joshi et al., 2017; Kwiatkowski et al., 2019) and multiple choice QA (Richardson et al., 2013; Lai et al., 2017). Our approach primarily falls into the third category. The basic 4-answer format we use is identical to RACE (Lai et al., 2017), which enables training models on RACE and evaluating them on OneStopQA. Our dataset is considerably smaller than RACE, but is of appropriate size for robust evaluations and error analyses. As demonstrated in this work, OneStopQA annotations are of substantially higher quality than RACE, and enable analyses which are not possible with RACE. MCTest (Richardson et al., 2013) was created with a similar purpose to RACE, but has a low text difficulty level suitable for 7-year-olds. Span identification QA is a task in which the correct answer to the question is one or more textual spans which the reader is required to mark. This task differs from multiple choice reading comprehension in its focus on information retrieval, which limits the range of question types (e.g. forces the answers to be primarily named entities) and their difficulty level. While our approach contains span annotations, our notion of span is different from that in span identification QA: spans are not 5734 considered as answers but rather as text regions that contain the critical information for the respective answer. This difference enables a higher difficulty degree and a wider scope of question types. The combination of this approach with a multiple choice answer structure which always has a span misinterpretation distractor facilitates deeper probing of text understanding and is designed to allow for more robustness to simple pattern matching. Prior work has explored both manual and automatic auxiliary span annotations for correct answers in multiple choice QA datasets (Khashabi et al., 2018; Wang et al., 2019). Our framework extends such annotations to include multiple distractor types, with B distractors providing an additional guarantee that simply identifying the critical span is not sufficient for answering the question correctly. We further demonstrate the utility of our distractor structure for automatic verification of annotation quality through ablation experiments, as well as detailed error comparisons between human and machine readers. 6 Discussion We introduce a new annotation framework for reading comprehension and an accompanying highquality dataset. We leverage the novel structure of our annotations to develop a methodology for automatic validation of annotations and to perform detailed comparisons between human and machine reading comprehension. Our experiments further demonstrate substantial quality assurance issues with RACE, which are alleviated in our new dataset. Our results demonstrate the promise of our annotation framework and dataset in supporting a wide range of reading behavior analyses, as well as the feasibility of developing automated question validation tools for reading comprehension examinations for humans as exciting directions for future work. Acknowledgments We thank Beining Jenny Zhang, Katherine Xiao, Margarita Misirpashayeva and Theodor Cucu for contributions to preparation of OneStopQA materials and collection of human subject data. We also thank Sowmya Vajjala for assistance with OneStopEnglish. 
We gratefully acknowledge support from Elemental Cognition and from NSF grant IIS1815529, a Google Faculty Research Award, and a Newton Brain Science Award to RPL. References Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question answering models. In EMNLP, pages 1955–1960. Ondrej Bajgar, Rudolf Kadlec, and Jan Kleindienst. 2016. Embracing data abundance: Booktest dataset for reading comprehension. arXiv preprint arXiv:1610.00956. Wei-Lun Chao, Hexiang Hu, and Fei Sha. 2018. Being negative but constructively: Lessons learnt from creating better visual question answering datasets. In NAACL-HLT, pages 431–441. Danqi Chen, Jason Bolton, and Christopher D Manning. 2016. A thorough examination of the CNN/Daily mail reading comprehension task. In ACL, pages 2358–2367. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In CVPR. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NeurIPS, pages 1693–1701. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The Goldilocks principle: Reading children’s books with explicit memory representations. arXiv preprint arXiv:1511.02301. Allan Jabri, Armand Joulin, and Laurens Van Der Maaten. 2016. Revisiting visual question answering baselines. In ECCV, pages 727–739. Springer. Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In NAACL, pages 252–262. J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (Automated Readability Index, Fog count and Flesch reading ease formula) for navy enlisted personnel. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. TACL. 5735 Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. In EMNLP, pages 785–794. G. Harry MC Laughlin. 1969. SMOG grading-a new readability formula. Journal of Reading, 12(8):639– 646. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human-generated machine reading comprehension dataset. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, pages 2383–2392. Matthew Richardson, Christopher JC Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, pages 193–203. 
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. ACL 2017, pages 191–200. Sowmya Vajjala and Ivana Luˇci´c. 2018. OneStopEnglish corpus: A new corpus for automatic readability assessment and text simplification. Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 297–304. Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu, David McAllester, and Dan Roth. 2019. Evidence sentence extraction for machine reading comprehension. In CoNLL, pages 696–707. A OneStopQA Construction and Piloting Questions for the OneStopQA articles were written and revised in the following manner. For each article, an annotator first composed a full draft of the questions along with span annotations. A first round of revisions for all the questions was then done by a second annotator, called “reviewer”. Subsequently, the annotator and the reviewer resolved the issues that were raised by the reviewer. In order to identify problematic and guessable questions, the questions were then piloted on the crowd-sourcing platform Prolific. Each participant in the Prolific pilot read a two-paragraph practice article taken from the OneStopEnglish corpus, followed by a OneStopQA article. In each single trial of the experiment, participants answered one reading comprehension question about one paragraph. Each trial consisted of three pages. On the first page, participants were presented with a question and its four possible answers and were asked to provide their best guess of the correct answer. On the following page, they read the paragraph. On the third page, the question and the answers were presented again (without the paragraph), and participants were asked to select the correct answer based on the content of the paragraph. The two practice article questions were on a lower difficulty level compared to the OneStopQA questions, and were used to identify and exclude participants who were not performing the task adequately. We conducted the Prolific pilot using the Elementary and Advanced versions of the articles, excluding the Intermediate level articles for cost efficiency. This resulted in six possible conditions for each trial, where each condition is a pairing of one of three possible questions with one of two possible difficulty levels for the paragraph. We consequently created 6 experimental lists with trials assigned at random to one of these conditions in a Latin square design. We collected data from 96 participants per article equally distributed between the 6 lists. This corresponds to 32 participants for each question: 16 for the Elementary version of the paragraph and 16 for the Advanced version. The results of the Prolific pilot were used to inform a third round of revisions, which focused on questions which which fell under a set of criteria designed to facilitate the identification of guessable and problematic questions, as well as questions that catered to a specific difficulty level of the paragraph. The different criteria, along with their motivation are presented in Table 6. In a fourth round of revisions, the answers were edited to ensure roughly equal average lengths for the four answer types across questions. Finally, the texts, questions and answers were proofread, and the span annotations were verified. 
Answer: A;  Choice rate: > 60% (pre reading);  Potential issue: guessable
Answer: A;  Choice rate: < 50% (post reading);  Potential issue: question/answers
Answer: A;  Choice rate: > 95% (post reading);  Potential issue: question too easy
Answer: B / C / D;  Choice rate: > 30% (post reading);  Potential issue: distractor
Answer: Any;  Choice rate: > 30% |ele - adv| (post reading);  Potential issue: question/answers may cater to one level
Table 6: Criteria for targeted question editing based on per-question results from a crowd-sourcing pilot on Prolific.
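The criteria in Table 6 lend themselves to automatic flagging once per-question choice rates from the pilot are available. The following sketch shows one possible implementation; the function name, the encoding of thresholds as fractions, and the input format are assumptions for illustration, not the authors' tooling:

# Hypothetical sketch: flag a question for targeted editing according to the
# Table 6 criteria, given per-question answer choice rates from the pilot.
def flag_question(pre_rate_a, post_rates, post_rates_by_level):
    """pre_rate_a: fraction of participants choosing A before reading the paragraph.
    post_rates: answer -> fraction choosing it after reading (pooled over levels).
    post_rates_by_level: level ("ele"/"adv") -> answer -> fraction after reading."""
    issues = []
    if pre_rate_a > 0.60:
        issues.append("guessable")
    if post_rates["A"] < 0.50:
        issues.append("issue in question/answers")
    if post_rates["A"] > 0.95:
        issues.append("question too easy")
    for distractor in ("B", "C", "D"):
        if post_rates[distractor] > 0.30:
            issues.append("issue in distractor " + distractor)
    if any(abs(post_rates_by_level["ele"][a] - post_rates_by_level["adv"][a]) > 0.30
           for a in "ABCD"):
        issues.append("question/answers may cater to one level")
    return issues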
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5736–5745 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5736 WinoWhy: A Deep Diagnosis of Essential Commonsense Knowledge for Answering Winograd Schema Challenge Hongming Zhang∗, Xinran Zhao∗, and Yangqiu Song Department of CSE, HKUST [email protected], [email protected], [email protected] Abstract In this paper, we present the first comprehensive categorization of essential commonsense knowledge for answering the Winograd Schema Challenge (WSC). For each of the questions, we invite annotators to first provide reasons for making correct decisions and then categorize them into six major knowledge categories. By doing so, we better understand the limitation of existing methods (i.e., what kind of knowledge cannot be effectively represented or inferred with existing methods) and shed some light on the commonsense knowledge that we need to acquire in the future for better commonsense reasoning. Moreover, to investigate whether current WSC models can understand the commonsense or they simply solve the WSC questions based on the statistical bias of the dataset, we leverage the collected reasons to develop a new task called WinoWhy, which requires models to distinguish plausible reasons from very similar but wrong reasons for all WSC questions. Experimental results prove that even though pre-trained language representation models have achieved promising progress on the original WSC dataset, they are still struggling at WinoWhy. Further experiments show that even though supervised models can achieve better performance, the performance of these models can be sensitive to the dataset distribution. WinoWhy and all codes are available at: https://github.com/ HKUST-KnowComp/WinoWhy. 1 Introduction Commonsense reasoning, as an important problem of natural language understanding, has attracted much more attention in the NLP community recently (Levesque et al., 2012; Zhou et al., 2018; Ostermann et al., 2018; Talmor et al., ∗Equal contribution. Figure 1: A pair of questions in WSC. 2019). Among all developed commonsense reasoning tasks, the Winograd Schema Challenge (WSC) (Levesque et al., 2012), which is a hard pronoun coreference resolution task, is one of the most influential ones. All questions in WSC are grouped into pairs such that paired questions have minor differences (mostly one-word difference), but reversed answers. For each question, we denote the other question in the same pair as its reverse question. One pair of the WSC task is shown in Figure 1. Based on the design guideline of WSC, all commonly used features (e.g., gender, plurality, and co-occurrence frequency) do not have any effect. Human beings can solve these questions because of their shared commonsense knowledge. For example, ordinary people can know that the pronoun ‘it’ in the first sentence refers to ‘fish’ while the one in the second sentence refers to ‘worm’ because ‘hungry’ is a common property of something eating things while ‘tasty’ is a common property of something being eaten. Conventionally, people tried to leverage crowdsourced commonsense knowledge bases (Liu et al., 2017) or search engines (Emami et al., 2018) to solve the WSC task, but performances of these models are not satisfying. Recently, pretrained language representation models (Kocijan et al., 2019; Radford et al., 2019; Liu et al., 2019) have demonstrated significant improvements in both unsupervised and supervised settings. 
However, as these approaches treat the concept ‘commonsense knowledge’ as a black box, we are not 5737 Figure 2: One example from the WinoWhy dataset. Plausible and implausible reasons are indicated with the tick and the crosses respectively. Resources of different reasons are shown in brackets. ‘Human Reverse’ means the human reason for the reverse question. clear about why they can do better (e.g., can these models understand commonsense or they just capture the statistical bias of the dataset) and do not know how to further improve them. To answer these two questions, in this work, we present the first deep diagnosis of essential commonsense knowledge for answering WSC questions. Specifically, we invite annotators to first provide reasons for why they choose the answers when they answer the questions, and then group all the WSC questions by different types of used commonsense knowledge (e.g., the property of entities, temporal knowledge, or spatial knowledge). By doing so, we can then analyze what kinds of commonsense knowledge can be well represented and understood by current models and more importantly, we can be clear about what kinds of commonsense knowledge are still challenging for current models, which could be an important future research direction for solving not only the WSC task but also the general commonsense reasoning problem. After the diagnosis, based on the collected reasons, we also create a new task WinoWhy, which aims at better evaluating models’ abilities to understand commonsense knowledge. For each question in the WSC task, we pair it with several reasons. Models are required to distinguish the correct reasons from all very similar but wrong candidates. From examples in Figure 2, we can see that even though all candidates are highly related to the original question, only one of them is the correct reason for resolving the coreference relation. Experimental results show that even though state-of-the-art models can achieve about 90% accuracy on the original WSC task, they are still struggling on WinoWhy questions, which shows that current models are still far away from understanding the commonsense knowledge. Moreover, by conducting experiments on both WSC and WinoWhy tasks, we prove that even though supervised models can achieve better performance, these models can be sensitive to the dataset distribution, which indicates that the improvement is probably coming from better capturing the statistical bias of the dataset rather than better understanding the required commonsense knowledge. The rest of the paper is organized as follows. In Section 2, we present the diagnosis of essential commonsense knowledge for answering WSC questions, which includes the reason collection and categorization. After that, we show how we create WinoWhy in Section 3. In Sections 4 and 5, we introduce the detailed experiments and analysis on both the original WSC and the proposed WinoWhy tasks. We introduce the related work about commonsense reasoning in Section 6. In the end, we conclude this paper with Section 7. 2 Commonsense Knowledge Diagnosis Commonsense reasoning is often viewed as one of the most challenging AI tasks and we still do not have a principled way of solving it. One important reason behind this is that, due to the vague definition of commonsense knowledge, we are not clear about what the essential knowledge types are and thus we are unclear about how to represent, acquire, and use them. 
As a result, we can only treat commonsense knowledge as a black box and try to learn it from limited training data. To explore a principled way of representing commonsense knowledge and solving commonsense reasoning problems, we take the Winograd Schema Challenge as the breaking point to conduct a detailed diagnosis of what kinds of knowledge are essential for answering these questions. To be specific, we first ask human beings to provide reasons why they make the correct decisions for all WSC questions. After that, we categorize these reasons by the involved knowledge types (e.g., the property of objects, temporal knowledge, or spatial knowledge). By doing so, we are more clear about how to acquire, represent, and apply such knowledge. Details are introduced as follows. 2.1 Reason Collection To collect high-quality reasons for answering all WSC questions, we employ the Amazon Mechanical Turk (MTurk) platform for our annotations and 5738 Figure 3: Reason collection interface on MTurk. Annotators are asked to provide reason(s) for all WSC questions in natural language. design a two-phase annotation procedure to collect the knowledge. In the first phase, we ask annotators to provide reasons for all WSC questions. Detailed instructions are provided such that annotators can fully understand the task. As each question may have multiple plausible reasons, for each question, we invite five annotators to provide reasons based on their own judgments. A screenshot of the survey is shown in Figure 3. As a result, we collect 1,365 reasons. As the quality of some given reasons might not be satisfying, we introduce the second round annotation to evaluate the quality of collected reasons. In the second phase, for each reason, we invite five annotators to verify whether they think the reason is reasonable or not. If at least four annotators think the reason is plausible, we will accept that reason. As a result, we identify 992 valid reasons. 2.2 Knowledge Categorization After collecting all reasons, we categorize them into different groups based on the used knowledge types. We first introduce the selected knowledge types and then introduce the detailed annotation procedure. 2.2.1 Knowledge Types A good categorization standard should have two properties: (1) Broad Coverage: it should cover most cases; (2) Exclusive: there should be clear boundaries between different categories. Following these standards, we found following two categorization methods of commonsense knowledge: 1. Conceptual Semantic Theory: According to Jackendoff’s original theory (Jackendoff, 1990), the semantics of human language can be expressed with a finite set of mental primitives and a finite set of principles of mental combination. As claimed by Jackendoff, even though the definition of mental primitives may vary based on different data or languages, some common primitives (i.e., entity, property, number, location, state, event, and activity) can be observed. These common primitives can thus be used as knowledge types for the commonsense knowledge categorization. 2. ConceptNet: As one of the most popular commonsense knowledge resources, ConceptNet 1.0 (Liu and Singh, 2004) defines 20 commonsense relations, which belong to eight categories (i.e., K-lines, Things, Agents, Events, Spatial, Causal, Functional, and Affective). In the latest version of ConceptNet (Speer et al., 2017), more relations (e.g., ‘RelatedTo’) from other resources are merged into ConceptNet. 
As they are relatively vague, we still follow the definition in ConceptNet 1.0 for the commonsense knowledge categorization. As there exist some overlaps between semantic primitives and categories in ConceptNet (e.g., ‘Agents’ and ‘Functional’ both describe certain properties of some objects), we first adopt all the commonly observed primitives in (Jackendoff, 1990) as the base knowledge types and then modify them based on the definition of categories from ConceptNet. For example, three primitives (activity, state, and event) and Events from ConceptNet can all be covered by the definition of Eventuality (P. D. Mourelatos, 1978). For the simplicity of the categorization and the quality of the annotation, we merge them. At the current stage, we remove ‘K-lines’ because it contains relations like ‘ConceptuallyRelatedTo’, which is relatively vague and difficult to be distinguished from other categories. Another exceptional knowledge type is ‘Causal’ from ConceptNet. During the annotation, we found out that annotators had difficulty understanding the strict definition of ‘Causal’ in ConceptNet (i.e., One event contributes to the creation of another one) and tended to annotate all reasons as ‘Causal’ because they think all reasons can somehow ‘cause’ the decision making. To make sure that all categories are easy for annotators, which are mostly ordinary people, to distin5739 Name Definition Example Property Knowledge about property of objects. ice is cold. Object Knowledge about objects. cats have ears. Eventuality Knowledge about eventualities. ‘wake up’ happens before ‘open eyes’. Spatial Knowledge about spatial position. object at the back can be blocked. Quantity Knowledge about numbers. 2 is smaller than 10. Others All other knowledge. NA Table 1: Names, definitions, and examples of selected knowledge types. Annotators are asked to select the most suitable knowledge type of each reason. If they think none of the first five categories is suitable, they are encouraged to choose ‘Others’. guish, we remove ‘Causal’. As we cannot guarantee that selected knowledge types could cover all kinds of knowledge, an additional type ‘Others’ is provided. Names, definitions, and examples of selected knowledge types are shown in Table 1. 2.2.2 Annotation For each collected valid reason, we invite annotators to select the knowledge type that can best describe the reason. Note that each reason may contain inference over multiple knowledge types. Thus, for each reason, we invite five different annotators to provide annotations. Each annotators are provided with detailed instruction of the job, descriptions of each candidate category, and examples for the category. As a result, we collect 4,960 annotations. We show the distribution of annotation results in Figure 4. From the distribution, we can see that all knowledge types are very important, especially the knowledge about objects (e.g., ‘cats have ears’) and eventualities (e.g., ‘people who give help often receive thanks later’). Besides that, we also notice that only 17% of all reason annotations (839) are ‘Others’, which indicates that the selected five categories can effectively cover 83% of the cases and thus the selected knowledge types fulfill the broad coverage requirement. We evaluate the annotation quality by average inner annotator agreement (IAA) and kappa coefficient (McHugh, 2012). We compute the IAA pair-wisely among all annotators. 
For each reason, if two annotators give the same knowledge type, we label it as agreed, otherwise, we label it as dis-agreed. The average IAA is 78.72%. We calculate the kappa coefficient based Figure 4: Distribution of different knowledge types. on the five raters and five categories setting and the result is 0.804. Considering that the annotation task is a multiple-choice task, such an agreement can indicate that the survey is well designed and annotators can clearly understand the task. For each WSC question, we select the most popular knowledge type among all valid reasons as the question’s major knowledge type. If multiple knowledge types have the same votes, we assume that question has multiple knowledge types. As a result, 222 questions have single knowledge type and 51 questions have multiple knowledge types. 3 WinoWhy In this section, we introduce details about the creation of WinoWhy. 3.1 Task Definition Each question in WinoWhy is defined as follows. Given a pronoun coreference resolution question and its correct answer from the original WSC data, models are asked to select all plausible reasons for making the correct prediction. WinoWhy can thus be viewed as a natural followup of the original WSC task and can help better understand models’ commonsense reasoning abilities. 3.2 Candidate Selection For each question, three kinds of candidate reasons are selected for annotators to annotate. The first reason resource is human annotation, which effectively represents how human beings solve these questions. Besides that, to collect some very similar but wrong reasons as negative examples, we consider the reasons provided by humans for the reverse question as a potential challenging wrong reason resource. Last but not least, besides reasons provided by human beings, we also lever5740 Figure 5: Distribution of reason plausibility score. The positive, acceptable, and negative reasons are denoted with the tick, confusing emoji, and cross respectively. age a strong generation model (i.e., GPT-2 (Radford et al., 2019)) to generate reasons. We provide the same questions that we showed to humans before (e.g., ‘The fish are the worm. it was hungry. It refers to fish because’) to the generation model and ask it to finish the sentences. For each question, we leverage the beam search to find the top five generated reasons. Merging all resources, we get 4,095 reasons for the next step annotation. 3.3 Annotations Similar to previous annotations, we invite annotators from Amazon Turk to help annotate whether the reasons are plausible or not. For each reason, we invite five different annotators and determine the plausibility score of each reason by voting. For example, if four out of the five annotators think one reason is plausible, its plausibility score is then 0.8. We use the same survey to annotate the plausibility of different reasons as Section 2.1. As a result, we collect 20,475 annotations. The average IAA is 91.49% and the kappa coefficient (five raters and two categories) is 0.880. 3.4 Dataset Analysis We show the distribution of annotation results in Figure 5, from which we can make the following observations. First, most of the reasons given by humans are reasonable, which fits our previous observation. Second, even though the majority of reverse reasons are not plausible, which fits our assumption, some of them do make sense. One scenario is that when the reason is comparing some property of both candidates, it can be used for both questions. 
For example, for the question pair “The trophy doesn’t fit into the brown suitReason Plausibility Score of the circumstances of his birth.” -C.B. 0.0/1.0 he’s the one who’s given him the money to do so. 0.2/1.0 it was Charlie who started the discussion and who encouraged Charlie to take up the challenge. 0.0/1.0 we feel grateful for the help from others 1.0/1.0 charlie is the one who get help. 0.6/1.0 Table 2: Given the sentence “Bob paid for Charlie’s college education. He is very grateful. The ‘He’ refers to Charlie because ”, the reasons generated by GPT-2 and corresponding plausibility scores. case because it is too small/large”, explanations like “Only small objects can fit into large objects” are plausible for both questions. Last but not least, not surprisingly, most of the reasons generated by GPT-2 have relatively low quality. To analyze why the reasons generated by GPT-2 are not satisfying, we show one example in Table 2. Based on the five reasons, we can find two limitations of GPT2: (1) it could generate some meaningless words (e.g., ‘-C.B.’), which could influence the overall quality significantly; (2) some of the answers are related and complete sentences by themselves, but they are not a valid reason for the question. For example, the second reason is wrong because Charlie cannot be the one who has given the money. These observations show that understanding commonsense knowledge is still a challenging task for current pre-trained language representation models like GPT-2. If at least four out of five annotators regard one reason as plausible, we label it as a positive reason. If only one or zero annotators think it is plausible, we label it as a negative reason. All others are labeled as acceptable reasons. To ensure the clear boundary between positive and negative examples in WinoWhy, only positive and negative reasons are selected to evaluate models. In total, WinoWhy contains 1,270 positive and 1,595 negative examples. 4 WSC Experiments In this section, we present the performance of current models on WSC. By doing so, we can better understand their strengths and limitations. 5741 4.1 Evaluated Methods and Implementation Recently, pre-trained language representation models have achieved significant improvement on the WSC task. In this section, we evaluate the following three models: 1. BERT (Devlin et al., 2019): As a powerful contextualized word representation model, it has been proven helpful in many downstream NLP tasks. As shown in (Kocijan et al., 2019), we can first convert the original WSC task into a token prediction task and then leverage BERT to solve the problem. We denote the base and large model of BERT as BERT (base) and BERT (large) respectively. 2. GPT-2 (Radford et al., 2019): GPT-2 is one of the best pre-trained language models for generation tasks. As reported in the original paper, we can first replace the pronouns with different candidates and leverage the probability of the full or partial sentences to make the prediction. Here we evaluate the small (117 M parameters) and the large (774 M parameters) models and denote those settings as GPT-2 (small, full), GPT-2 (small, partial), GPT-2 (large, full), and GPT-2 (large, partial) respectively. 3. RoBERTa (Liu et al., 2019): RoBERTa is a recent improved version of BERT with larger amount of training instances and techniques such as dynamic masking, which performs consistently better than BERT over many benchmark datasets. 
We denote the base and large models of RoBERTa as RoBERTa (base) and RoBERTa (large) respectively. Besides unsupervised models, as indicated by (Kocijan et al., 2019), fine-tuning BERT with a similar pronoun resolution dataset WSCR (Rahman and Ng, 2012) can help boost the performance. A later work (Sakaguchi et al., 2019) has further enhanced the performance by fine-tuning RoBERTa with a larger and more balanced dataset WinoGrande. Statistics of these datasets are presented in Table 3. In our experiments, we evaluate the combination of different pre-trained models and fine-tuning datasets, and denote them as BERT (base/large) + WSCR/Grande and RoBERTa (base/large) + WSCR/Grande respectively. Dataset #Problems Average Length #Vocab WSC 273 19.1 919 WSCR 1,886 15.9 4,127 WinoGrande 43,972 20.6 16,469 Table 3: Statistics of WSC and related datasets. 4.2 Result Analysis From the result in Table 4, we can make following observations: (1) Larger models perform better on all knowledge types due to their stronger semantic representation abilities; (2) The partial version of GPT-2 significantly outperforms the full version, which is consistent with the observation in (Trinh and Le, 2018) and is mainly because the influence of imbalanced distribution of candidate words are relieved by only considering the sentence probability after the pronouns. Such observation also explains why GPT-2 can outperform unsupervised BERT on WSC because models based on BERT, which rely on predicting the probability of candidate words, cannot get rid of such noise; (3) For most models, questions that require spatial knowledge are the most challenging ones. One possible explanation is that the inference over spatial knowledge is often triggered by a preposition (e.g., ‘in’ or ‘behind’), which is challenging for language representation models to remember without enough training corpus for spatial knowledge specifically; (4) Questions belonging to ‘Others’ involve more complex inference, even over multiple types of knowledge and thus most models perform poorly on that. The only exception is RoBERTa, which leverages its strong language representation ability to overcome such a challenge; (5) Fine-tuning over WinoGrande significantly boosts the performance. Besides the above analysis, we are also interested in how different models perform on questions that require complex reasoning types. Thus we divide all WSC questions based on how many knowledge types are required to solve these questions and show the result in Table 5. Based on the result, we can see that relatively small models (e.g., BERT (base) and RoBERTa (base)) perform better on questions that require single knowledge types rather than multiple knowledge types. 
However, for large models (e.g., BERT (large) and RoBERTa (large)), as long as the suitable fine-tuning dataset is provided, they can achieve 5742 Model Property Object Eventuality Spatial Quantity Others Overall (32) (82) (88) (64) (20) (48) (273) BERT (base) 56.25% 64.63% 50.00% 57.81% 50.00% 45.83% 56.04% BERT (large) 56.25% 62.20% 62.50% 67.19% 45.00% 52.08% 61.90% RoBERTa (base) 43.75% 51.22% 56.82% 51.56% 55.00% 39.58% 51.65% RoBERTa (large) 50.00% 51.22% 52.27% 48.44% 65.00% 56.25% 52.75% GPT-2 (small, full) 56.25% 51.22% 55.68% 51.56% 60.00% 47.92% 52.75% GPT-2 (small, partial) 43.75% 60.98% 53.41% 51.56% 60.00% 54.17% 53.48% GPT-2 (large, full) 68.75% 68.29% 61.36% 53.13% 55.00% 45.83% 59.34% GPT-2 (large, partial) 65.63% 75.61% 72.73% 62.50% 65.00% 60.42% 69.23% BERT (base) + WSCR 71.88% 64.63% 55.68% 59.38% 65.00% 45.83% 59.71% BERT (large) + WSCR 81.25% 75.61% 73.86% 67.19% 85.00% 64.58% 71.43% BERT (base) + Grande 65.63% 58.54% 60.23% 59.38% 55.00% 56.25% 60.34% BERT (large) + Grande 75.00% 70.73% 77.27% 79.69% 75.00% 68.75% 73.63% RoBERTa (base) + WSCR 62.50% 60.98% 57.95% 64.06% 55.00% 64.58% 63.00% RoBERTa (large) + WSCR 84.38% 84.15% 79.55% 76.56% 70.00% 81.25% 80.95% RoBERTa (base) + Grande 75.00% 67.07% 72.73% 75.00% 80.00% 70.83% 72.16% RoBERTa (large) + Grande 90.63% 84.15% 93.18% 84.38% 90.00% 89.58% 87.55% Table 4: Performances of different models on WSC questions. Questions are grouped by their major knowledge types. If one question contains more than one knowledge types, it will be counted in all categories. If one question contains only ‘Others’ knowledge, it will be grouped into ‘Others’. Numbers of questions are shown in brackets. Model Single Multiple (222) (51) BERT (base) 56.31% 54.90% BERT (large) 63.06% 56.86% RoBERTa (base) 53.15% 45.10% RoBERTa (large) 54.05% 47.06% GPT-2 (small, full) 51.80% 56.86% GPT-2 (small, partial) 53.48% 54.90% GPT-2 (large, full) 58.56% 62.74% GPT-2 (large, partial) 70.27% 64.71% BERT (base) + WSCR 59.91% 58.82% BERT (large) + WSCR 70.27% 76.47% BERT (base) + Grande 61.26% 56.86% BERT (large) + Grande 72.52% 78.43% RoBERTa (base) + WSCR 64.86% 54.90% RoBERTa (large) + WSCR 81.53% 78.43% RoBERTa (base) + Grande 72.97% 68.63% RoBERTa (large) + Grande 86.94% 90.20% Table 5: Performances of different models on different sets of WSC questions. Questions are grouped by the number of essential knowledge types (i.e., single or multiple). Numbers of questions are shown in brackets. similar and even better performance on the complicated questions. In general, this observation is consistent with our previous observations that large models are capable of solving complex questions from the ‘Others’ category with the support of suitable fine-tuning datasets. 5 WinoWhy Experiments In this section, we conduct experiments to investigate whether current models can understand how human beings solve WSC questions. 5.1 Unsupervised Setting Experiment Details: To evaluate whether pretrained language representation models, which achieve the state-of-the-art performance on the WSC task, can distinguish the plausible reasons against the wrong ones, following (Kocijan et al., 2019; Radford et al., 2019; Sakaguchi et al., 2019), we first connect the questions and candidate reasons into single sentences, put them into the models, and take the returned probability as the prediction. Higher probability indicates higher plausibility prediction. Best thresholds are selected for different models to calculate the final accuracy. 
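A minimal sketch of this unsupervised scoring procedure for an autoregressive model is shown below. It uses the Hugging Face transformers GPT-2 interface; the concatenation template and the threshold value are illustrative assumptions rather than the paper's exact setup:

# Hypothetical sketch (Hugging Face transformers; not the authors' code):
# score a WSC question plus a candidate reason by the average token
# log-probability of their concatenation under GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def plausibility_score(question_with_answer, reason):
    text = question_with_answer + " This is because " + reason   # template is an assumption
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean next-token
        # cross-entropy; its negation is an average log-probability.
        loss = model(ids, labels=ids).loss
    return -loss.item()

score = plausibility_score(
    "The fish ate the worm. It was hungry. 'It' refers to the fish.",
    "being hungry is a reason to eat, not a reason to be eaten.")
predicted_plausible = score > -3.5   # threshold is illustrative; the paper selects the best threshold per model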
Similar to Section 4, we evaluate BERT (base), BERT (large), GPT-2 (small), GPT-2 (large), RoBERTa (base), and RoBERTa (large) on WinoWhy. For GPT-2 models, as the partial setting has been proved more useful, we only report the performances based on the partial setting. Besides these two, we also consider BERT/RoBERTa + WSCR/Grande combinations as additional unsupervised approaches because they are not directly optimized towards the WinoWhy task. Result Analysis: Based on the results shown in Table 6, we can observe that even though pre-trained language representation models have achieved significant improvement over the original WSC task, they are still struggling on the WinoWhy task. Moreover, experimental results on different knowledge types prove that such a conclusion is universal rather than for a specific 5743 Model Property Object Eventuality Spatial Quantity Others Overall (337) (856) (928) (674) (206) (496) (2865) Majority Voting 54.30% 56.31% 56.47% 52.67% 52.43% 55.24% 55.67% BERT (base) 56.97% 56.54% 56.25% 54.01% 51.94% 55.44% 55.92% BERT (large) 56.38% 57.24% 56.14% 53.41% 51.94% 56.65% 56.13% RoBERTa (base) 54.30% 56.31% 56.90% 52.67% 52.91% 55.44% 55.78% RoBERTa (large) 54.30% 56.43% 56.47% 52.67% 52.43% 55.04% 55.67% GPT-2 (small) 56.68% 54.91% 57.11% 54.45% 59.71% 57.66% 56.37% GPT-2 (large) 57.57% 54.44% 54.42% 55.93% 54.85% 54.84% 55.77% BERT (base) + WSCR 55.49% 56.31% 56.90% 52.97% 51.94% 55.04% 55.71% BERT (large) + WSCR 56.97% 56.31% 56.79% 53.12% 52.91% 55.04% 55.99% BERT (base) + Grande 57.27% 56.43% 57.22% 53.41% 52.91% 55.24% 55.99% BERT (large) + Grande 54.90% 56.07% 56.57% 52.67% 52.91% 55.44% 55.71% RoBERTa (base) + WSCR 52.82% 55.61% 58.41% 53.26% 56.31% 55.04% 56.19% RoBERTa (large) + WSCR 54.90% 58.06% 56.90% 52.08% 52.91% 56.85% 56.23% RoBERTa (base) + Grande 56.08% 58.88% 58.19% 55.64% 57.28% 57.66% 58.05% RoBERTa (large) + Grande 56.08% 58.06% 59.59% 56.82% 56.80% 58.06% 58.18% Table 6: Performances of different models on WinoWhy questions. We report performances of different reason sets based on the required knowledge types. Reasons could belong to multiple categories as the original WSC questions could contain more than one knowledge types. Numbers of questions are shown in brackets. kind of knowledge. One possible reason is that even though the designers of WSC are trying to avoid any statistical correlation between the answer and the trigger word, such statistical correlation still exists. As a result, pre-trained language representation models can learn such correlation from large-scale training corpus and thus can answer WSC questions without fully understanding the reasons behind. Besides that, another interesting finding is that GPT-2 (large), as the best unsupervised model on WSC, performs poorly on WinoWhy. One possible explanation is that a lot of negative examples are generated with GPT-2 (large), and thus the dataset brings extra challenges for GPT-2 (large). Last but not least, we can find that fine-tuning over similar dataset (i.e., WSCR and WinoGrande) can slightly help RoBERTa, but the effect is still quite limited. This is probably because such a fine-tuning procedure only teaches pre-trained models to better answer WSC questions rather than understand the commonsense knowledge behind. 5.2 Supervised Setting Besides the unsupervised setting, we are also interested in whether a model can learn to distinguish reasons through supervised learning. 
5.2.1 Experiment Details
Here, we randomly divide the annotated dataset into five groups and conduct five-fold cross-validation. We tried two different splitting methods: one based on the WSC questions and the other based on the reasons. We denote these two settings as Five-fold (q) and Five-fold (r) respectively. As WinoWhy can be viewed as a text classification task, we adopt the traditional encoding+classification framework and leverage a two-layer feed-forward neural network as the classification module. Seven different encoding methods (Bi-LSTM (Hochreiter and Schmidhuber, 1997), BERT (base), BERT (large), GPT-2 (small), GPT-2 (large), RoBERTa (base), and RoBERTa (large)) are evaluated. For the LSTM, we set the number of layers to two and the hidden embedding dimension to 300, and use Glove (Pennington et al., 2014) as the word embedding. All models are trained for ten epochs. Average accuracies over folds and standard deviations are reported.
Setting         Model             Accuracy   std
Five-fold (q)   Glove + LSTM      59.74%     1.04%
                BERT (base)       77.48%     2.06%
                BERT (large)      77.39%     1.54%
                RoBERTa (base)    75.01%     2.48%
                RoBERTa (large)   75.04%     1.97%
                GPT-2 (small)     74.48%     2.43%
                GPT-2 (large)     75.89%     1.35%
Five-fold (r)   Glove + LSTM      64.92%     1.76%
                BERT (base)       77.77%     1.54%
                BERT (large)      77.50%     2.43%
                RoBERTa (base)    74.41%     1.35%
                RoBERTa (large)   74.66%     1.75%
                GPT-2 (small)     76.19%     3.69%
                GPT-2 (large)     76.13%     4.30%
Table 7: Accuracy and standard deviation (std) results of the evaluated supervised models.
5.2.2 Result Analysis
The results in Table 7 demonstrate that, in general, WinoWhy is a challenging task: the best supervised model can only achieve 77.77% accuracy on a two-class classification task. Besides that, we also notice that all models exhibit relatively large standard deviations, especially under the 'Five-fold (r)' setting, which may imply that these supervised models are sensitive to the dataset distribution. Both observations show that training a supervised model on WinoWhy is not enough to fully understand the reasons behind WSC decisions, and we may need to include reasoning over more complex knowledge to solve this challenging problem.
5.3 Discussion
Based on the observations that fine-tuning over WSCR and WinoGrande only helps solve WSC rather than WinoWhy, and that machine-learning models trained on WinoWhy can be sensitive to the dataset distribution, it is reasonable to suspect that the improvement achieved by fine-tuning over a similar or identical dataset might come from better dataset fitting rather than better commonsense reasoning. As the original purpose of proposing both WSC and WinoWhy is to evaluate how well current AI systems can understand commonsense knowledge, rather than how well they can solve these questions by fitting the dataset, the unsupervised setting might be the more reasonable evaluation setting.
6 Related Work
As an important knowledge resource for many artificial intelligence systems, commonsense knowledge covers various knowledge categories like causality (Sap et al., 2019), reasoning (Schubert, 2015), property (Liu and Singh, 2004), and quantity (Elazar et al., 2019), and has been proven crucial in many downstream tasks like question answering (Lin et al., 2019), dialogue systems (Zhou et al., 2018), reading comprehension (Wang et al., 2018), and pronoun coreference resolution (Levesque et al., 2012).
Among all these tasks, Winograd Schema Challenge (WSC) (Levesque et al., 2012) is viewed as one of the most challenging ones because solving WSC questions typically requires inference over various kinds of commonsense knowledge. Conventionally, people tried to solve WSC questions in an unsupervised way by leveraging either search engines (Emami et al., 2018), linguistic knowledge (Zhang et al., 2019, 2020), or language representation models (Kocijan et al., 2019). Experimental results showed that these models still cannot fully solve the problem but we are not clear about how to further improve them. One important reason behind this is that the conventional definition of commonsense knowledge is too vague and thus we are not clear about what kinds of knowledge are still challenging for current commonsense reasoning models. In this paper, we use the WSC task as the breaking point to conduct a deep diagnosis of essential commonsense knowledge types, which sheds some light on how to achieve a better commonsense reasoning system in the future. 7 Conclusion In this paper, we presented the first deep diagnosis of essential commonsense knowledge for answering Winograd Schema Challenge questions. By doing so, we better understand the strengths and limitations of current commonsense reasoning models. More importantly, we better know about what kinds of commonsense knowledge are required to be acquired for better commonsense reasoning. On top of the collected reasons, we develop a new task called WinoWhy, which requires models to select the plausible reasons for answering WSC questions. Experiments show that even though current models have gained significant improvement over the original WSC task, they still cannot fully understand the reasons behind. Acknowledgements This paper was supported by the Early Career Scheme (ECS, No. 26206717) from the Research Grants Council in Hong Kong and the Tencent AI Lab Rhino-Bird Focused Research Program. References Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019, pages 4171–4186. Yanai Elazar, Abhijit Mahabal, Deepak Ramachandran, Tania Bedrax-Weiss, and Dan Roth. 2019. How large are lions? inducing distributions over quantitative attributes. In Proceedings of ACL 2019, pages 3973–3983. 5745 Ali Emami, Noelia De La Cruz, Adam Trischler, Kaheer Suleman, and Jackie Chi Kit Cheung. 2018. A knowledge hunting framework for common sense reasoning. In Proceedings of EMNLP 2018, pages 1949–1958. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Ray Jackendoff, editor. 1990. Semantic Structures. Cambridge, Massachusetts: MIT Press. Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, and Thomas Lukasiewicz. 2019. A surprisingly robust trick for the winograd schema challenge. In Proceedings of ACL 2019, pages 4837–4842. Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Proceedings of KR 2012. Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. Kagnet: Knowledge-aware graph networks for commonsense reasoning. In Proceedings of EMNLP-IJCNLP 2019, pages 2829–2839. Hugo Liu and Push Singh. 2004. Conceptnet: a practical commonsense reasoning tool-kit. BT technology journal, 22(4):211–226. Quan Liu, Hui Jiang, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, and Yu Hu. 2017. 
Combing context and commonsense knowledge through neural networks for solving winograd schema problems. In Proceedings of AAAI Spring Symposia 2017. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692. Mary L McHugh. 2012. Interrater reliability: the kappa statistic. Biochemia medica: Biochemia medica, 22(3):276–282. Simon Ostermann, Michael Roth, Ashutosh Modi, Stefan Thater, and Manfred Pinkal. 2018. Semeval2018 task 11: Machine comprehension using commonsense knowledge. In Proceedings of SemEval2018, pages 747–757. Alexander P. D. Mourelatos. 1978. Events, processes, and states. Linguistics and Philosophy, 2:415–434. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP 2014, pages 1532–1543. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9. Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: The winograd schema challenge. In Proceedings of EMNLPCoNLL 2012, pages 777–789. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Winogrande: An adversarial winograd schema challenge at scale. In Proceedings of AAAI 2020. Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: an atlas of machine commonsense for ifthen reasoning. In Proceedings of the AAAI 2019, pages 3027–3035. Lenhart K Schubert. 2015. What kinds of knowledge are needed for genuine understanding. In Proceedings of Cognitum 2015. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of AAAI 2017, pages 4444–4451. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In Proceedings of NAACL-HLT 2019, pages 4149–4158. Trieu H. Trinh and Quoc V. Le. 2018. A simple method for commonsense reasoning. CoRR, abs/1806.02847. Liang Wang, Meng Sun, Wei Zhao, Kewei Shen, and Jingming Liu. 2018. Yuanfudao at semeval2018 task 11: Three-way attention and relational knowledge for commonsense machine comprehension. arXiv preprint arXiv:1803.00191. Hongming Zhang, Hantian Ding, and Yangqiu Song. 2019. SP-10K: A large-scale evaluation set for selectional preference acquisition. In Proceedings of ACL 2019, pages 722–731. Hongming Zhang, Xin Liu, Haojie Pan, Yangqiu Song, and Cane Wing-Ki Leung. 2020. Aser: A largescale eventuality knowledge graph. In Proceedings of WWW 2020, pages 201–211. Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Commonsense knowledge aware conversation generation with graph attention. In Proceedings of IJCAI 2018, pages 4623–4629.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5746–5758 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5746 Agreement Prediction of Arguments in Cyber Argumentation for Detecting Stance Polarity and Intensity Joseph W Sirrianni, Xiaoqing “Frank” Liu, Douglas Adams University of Arkansas, Fayetteville, AR, USA. {jwsirria,frankliu,djadams}@uark.edu Abstract In online debates, users express different levels of agreement/disagreement with one another’s arguments and ideas. Often levels of agreement/disagreement are implicit in the text and must be predicted to analyze collective opinions. Existing stance detection methods predict the polarity of a post’s stance toward a topic or post, but don’t consider the stance’s degree of intensity. We introduce a new research problem, stance polarity and intensity prediction in response relationships between posts. This problem is challenging because differences in stance intensity are often subtle and require nuanced language understanding. Cyber argumentation research has shown that incorporating both stance polarity and intensity data in online debates leads to better discussion analysis. We explore five different learning models: Ridge-M regression, RidgeS regression, SVR-RF-R, pkudblab-PIP, and T-PAN-PIP for predicting stance polarity and intensity in argumentation. These models are evaluated using a new dataset for stance polarity and intensity prediction collected using a cyber argumentation platform. The SVR-RFR model performs best for prediction of stance polarity with an accuracy of 70.43% and intensity with RMSE of 0.596. This work is the first to train models for predicting a post’s stance polarity and intensity in one combined value in cyber argumentation with reasonably good accuracy. 1 Introduction Many major online and social media and networking sites, such as Facebook, Twitter, and Wikipedia, have taken over as the new public forum for people to discuss and debate issues of national and international importance. With more participants in these debates than ever before, the volume of unstructured discourse data continues to increase, and the need for automatic processing of this data is prevalent. A critical task in processing online debates is to automatically determine the different argumentative relationships between online posts in a discussion. These relationships typically consist of a stance polarity (i.e., whether a post is supporting, opposing, or is neutral toward another post) and the degree of intensity of the stance. Automatically determining these types of relationships from a given text is a goal in both stance detection and argumentation mining research. Stance detection models seek to automatically determine a text’s stance polarity (Favoring, Opposing, or Neutral) toward another text or topic based on its textual information (Mohammad et al., 2016). Likewise, argumentation mining seeks to determine the stance relationship (Supporting, Attacking, or Neutral) between argumentation components in a text (Stede and Schneider, 2018). However, in both cases, attention is only paid to the stance’s polarity, while the intensity of the relationship is often ignored. 
Some studies have tried to incorporate intensity into their predictions by expanding the number of classes to predict (Strongly For, For, Other, Against, and Strongly Against); however, this expansion lowered their classification performance considerably compared classification without intensity (Sobhani et al., 2015). Thus, effective incorporation of stance intensity into stance classification remains an issue. Research in Cyber Argumentation has shown that incorporating both stance polarity and intensity information into online discussions improves the analysis of discussions and the various phenomena that arise during a debate, including opinion polarization (Sirrianni et al., 2018), and identifying outlier opinions (Arvapally et al., 2017), compared to using stance polarity alone. Thus, automatically identifying both the post’s stance polarity and intensity, allows these powerful analytical models to be 5747 applied to unstructured debate data from platforms such as Twitter, Facebook, Wikipedia, comment threads, and online forums. To that end, in this paper, we introduce a new research problem, stance polarity and intensity prediction in a responsive relationship between posts, which aims to predict a text’s stance polarity and intensity which we combine into a single continuous agreement value. Given an online post A, which is replying to another online post B, we predict the stance polarity and intensity value of A towards B using A’s (and sometimes B’s) textual information. The stance polarity and intensity value is a continuous value, bounded from -1.0 to +1.0, where the value’s sign (positive, negative, or zero) corresponds to the text’s stance polarity (favoring, opposing, or neutral) and the value’s magnitude (0 to 1.0) corresponds to the text’s stance intensity. Stance polarity and intensity prediction encapsulates stance detection within its problem definition and is thus a more difficult problem to address. While stance polarity can be identified through specific keywords (e.g., “agree”, “disagree”), the intensity is a much more fuzzy concept. The difference between strong opposition and weak opposition is often expressed through subtle word choices and conversational behaviors. Thus, to accurately predict agreement intensity, a learned model must understand the nuances between word choices in the context of the discussion. We explore five machine learning models for agreement prediction, adapted from the topperforming models for stance detection: RidgeM regression, Ridge-S regression, SVR-RF-R, pkudblab-PIP, and T-PAN-PIP. These models were adapted from Mohammad et al. (2016), Sobhani et al. (2016), Mourad et al. (2018), Wei et al. (2016), and Dey et al. (2018) respectively. We evaluated these models on a new dataset for stance polarity and intensity prediction, collected over three empirical studies using our cyber argumentation platform, the Intelligent Cyber Argumentation System (ICAS) . This dataset contains over 22,000 online arguments from over 900 users discussing four important issues. In the dataset, each argument is manually annotated by their authoring user with an agreement value. Results from our empirical analysis show that the SVR-RF-R ensemble model performed the best for agreement prediction, achieving an RMSE score of 0.596 for stance polarity and intensity prediction, and an accuracy of 70% for stance detection. 
Further analysis revealed that the models trained for stance polarity and intensity prediction often had better accuracy for stance classification (polarity only) compared to their counterpart stance detection models. This result demonstrates that the added difficulty of detecting stance intensity does not come at the expense of detecting stance polarity. To our knowledge, this is the first time that learning models can be trained to predict an online post’s stance polarity and intensity simultaneously. The contributions of our work are the following: • We introduce a new research problem called stance polarity and intensity prediction, which seeks to predict a post’s agreement value that contains both the stance polarity (value sign) and intensity (value magnitude), toward its parent post. • We apply five machine learning models on our dataset for agreement prediction. Our empirical results reveal that an ensemble model with many hand-crafted features performed the best, with an RMSE of 0.595, and that models trained for stance polarity and intensity prediction do not lose significant performance for stance detection. 2 Related Work 2.1 Stance Detection Stance detection research has a wide interest in a variety of different application areas including opinion mining (Hasan and Ng, 2013), sentiment analysis (Mohammad, 2016), rumor veracity (Derczynski et al., 2017), and fake news detection (Lillie and Middelboe, 2019). Prior works have applied stance detection to many types of debate and discussion settings, including congressional floor debates (Burfoot et al., 2011), online forums (Hasan and Ng, 2013; Dong et al., 2017), persuasive essays (Persing and Ng, 2016), news articles (Hanselowski et al., 2018), and on social media data like Twitter (Mohammad et al., 2016). Approaches to stance detection depends on the type of text and relationship the stance is describing. For example, stance detection on Twitter often determines the author’s stance (for/against/neutral) toward a proposition or target (Mohammad et al., 2016). In this work, we adapt the features sets and models used on the SemEval 2016 stance detection task Twitter dataset (Mohammad et al., 2016). 5748 This dataset has many similarities to our data in terms of post length and topics addressed. Approaches to Twitter stance detection include SVMs (Mohammad et al., 2016; Sobhani et al., 2016; Elfardy and Diab, 2016), ensemble classifiers (Tutek et al., 2016; Mourad et al., 2018), convolutional neural networks (Igarashi et al., 2016; Vijayaraghavan et al., 2016; Wei et al., 2016), recurrent neural networks (Zarrella and Marsh, 2016; Dey et al., 2018), and deep learning approaches (Sun et al., 2018; Sobhani et al., 2019). Due to the size of the dataset, the difference in domain, and time constraints, we did not test Sun et al. (2018)’s model in this work, because we could not gather sufficient argument representation features. 2.2 Argumentation Mining Argumentation mining is applied to argumentative text to identify the major argumentative components and their relationships to one another (Stede and Schneider, 2018). While stance detection identifies the relationship between an author’s stance toward a concept or target, argumentation mining identifies relationships between arguments, similar to our task in agreement prediction. However, unlike our task, argumentation mining typically defines arguments based on argument components, instead of treating an entire post as a single argument. 
In argumentation mining, a single text may contain many arguments. The major tasks of argumentation mining include: 1) identifying argumentative text and separating it from non-argumentative text, 2) classifying argumentation components (e.g., Major Claim, Claims, Premise, etc.) in the text, 3) determining the relationships between the different components, and 4) classifying the relationships as supporting, attacking, or neutral (Lippi and Torroni, 2016). End-to-end argument mining seeks to solve all of these tasks at once (Persing and Ng, 2016; Eger et al., 2017), but most research focuses on one or two tasks at a time. The most pertinent task to this work is the fourth (though it is often combined with the third). Approaches to this task include textual entailment suites with syntactic features (Boltužić and Šnajder, 2014), or machine learning classifiers with different combinations of features, including structural and lexical features (Persing and Ng, 2016), sentiment features (Stab and Gurevych, 2017), and topic modeling features (Nguyen and Litman, 2016). We use many of these types of features in our Ridge-S and SVR-RF-R models.
2.3 Cyber Argumentation Systems
Cyber argumentation systems help facilitate and improve understanding of large-scale online discussions, compared to other platforms used for debate, such as social networking and media platforms, online forums, and chat rooms (Klein, 2011). These systems typically employ argumentation frameworks, like IBIS (Kunz and Rittel, 1970) and Toulmin's structure of argumentation (Toulmin, 2003), to provide structure to discussions, making them easier to analyze. More specialized systems include features that improve the quality and understanding of discussions. Argumentation learning systems teach users effective debating skills using argumentation scaffolding (Bell and Linn, 2000). More complex systems, like ICAS and the Deliberatorium (Klein, 2011), provide several integrated analytical models that identify and measure various phenomena occurring in the discussions.
3 Background
3.1 ICAS Platform
Our research group has developed an intelligent cyber argumentation system, ICAS, for facilitating large-scale discussions among many users (Liu et al., 2007, 2010, 2011; Chanda and Liu, 2015; Liu et al., 2012; Arvapally et al., 2017; Sirrianni et al., 2018). ICAS is an updated version of the OLIAS argumentation system (Arvapally and Liu, 2013). ICAS implements an IBIS structure (Kunz and Rittel, 1970), where each discussion is organized as a tree. In ICAS, discussions are organized by issue. Issues are important problems that need to be addressed by the community. Under each issue are several positions, which act as solutions or approaches toward solving the issue. Under each position, there are several arguments that argue for or against the parent position. Under these arguments, there can be any number of follow-on arguments that argue for or against the parent argument, and so on until the discussion has ended. Figure 1 provides a visualization of the discussion tree structure ICAS employs.
Figure 1: An example discussion tree structure used in ICAS. The value above an argument is its agreement value.
In ICAS, arguments have two components: a textual component and an agreement value. The textual component is the written argument the user makes. ICAS does not limit the length of argument text; however, in practice, the average argument length is about 160 characters, similar to the length of a tweet.
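As a minimal sketch of the issue-position-argument hierarchy just described (illustrative only, not the ICAS implementation; the class and field names, and the example texts, are assumptions), the discussion tree can be represented as nested records, where each argument carries the author-annotated agreement value described next:

```python
# Minimal sketch of the IBIS-style discussion tree described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    text: str
    agreement: float                                            # author-annotated value in [-1.0, +1.0]
    replies: List["Argument"] = field(default_factory=list)     # follow-on arguments

@dataclass
class Position:
    text: str
    arguments: List[Argument] = field(default_factory=list)

@dataclass
class Issue:
    text: str
    positions: List[Position] = field(default_factory=list)

# e.g., one position under an issue, with a short reply chain beneath it
issue = Issue("Should individuals be required by the government to have health insurance?")
position = Position("Yes, an individual mandate keeps insurance pools balanced.")  # illustrative position text
arg = Argument("Partially agree, but enforcement seems impractical.", agreement=0.4)
arg.replies.append(Argument("Disagree; tax penalties already enforce similar rules.", agreement=-0.8))
position.arguments.append(arg)
issue.positions.append(position)
```

Follow-on arguments simply nest further inside the reply lists, which is what allows reply threads of arbitrary depth.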
The agreement value is a numerical value that indicates the extent to which an argument agrees or disagrees with its parent. Unlike other argumentation systems, this system allows users to express partial agreement or disagreement with other posts. Users are allowed to select agreement values from a range of -1 to +1 at 0.2 increments that indicate different partial agreement values. Positive values indicate partial or complete agreement, negative values indicate partial or complete disagreement, and a value of 0 indicates indifference or neutrality. These agreement values represent each post’s stance polarity (the sign) and intensity (the magnitude). These agreement values are distinctly different from other argumentation weighting schemes where argument weights represent the strength or veracity of an argument (see (Amgoud and Ben-Naim, 2018; Levow et al., 2014)). Each agreement value is selected by the author of the argument and is a mandatory step when posting. 4 Models for Stance Polarity and Intensity Prediction This section describes the models we applied to the stance polarity and intensity prediction problem. We applied five different models, adapted from top-performing stance classification models based on their performance and approach on the SemEval 2016 stance classification Twitter dataset (Mohammad et al., 2016). 4.1 Ridge Regressions (Ridge-M and Ridge-S) Our first two models use a linear ridge regression as the underlying model. We created two ridge regression models using two feature sets. The first ridge model (Ridge-M) used the feature set described in Mohammad et al. (2016) as their benchmark. They used word 1-3 grams and character 2-5 grams as features. We filtered out English stop words, tokens that existed in more than 95% of posts, and tokens that appear in less than 0.01% of posts for word N-grams and fewer than 10% for character N-grams. There were a total of 838 N-gram features for the Ridge-M model. The second ridge model (Ridge-S) used the feature set described in Sobhani, Mohammad, and Kiritchenko’s follow-up paper (2016). In that paper, they found the sum of trained word embeddings with 100 dimensions, in addition to the N-gram features outlined by Mohammad et al. (2016), to be the best-performing feature set. We trained a word-embedding (skip-gram word2vec) model on the dataset. For each post, and summed the embeddings for each token in the post were summed up and normalized by the total number of tokens of a post to generate the word embedding features. Ridge-S had 938 total features. 4.2 Ensemble of Regressions (SVR-RF-R) This model (SRV-RF-R) consisted of an averagevoting ensemble containing three different regression models: an Epsilon-Support Vector Regression model, a Random Forest regressor, and a ridge regression model. This model is an adaption of the ensemble model presented by Mourad et al. (2018) for stance detection. Their model used a large assortment of features, including linguistic features, topic features, tweet-specific features, labeled-based features, word-Embedding features, similarity features, context features, and sentiment lexicon features. They then used the feature selection technique reliefF (Kononenko et al., 1997) to select the top 50 features for usage. Due to the changes in context (Twitter vs. Cyber Argumentation), we constructed a subset of their feature set, which included the following features1: • Linguistic Features: Word 1-3 grams as binary vectors, count vectors, and tf-idf weighted vectors. 
Character 1-6 grams as count vectors. POS tag 1-3 grams concatenated with their words (e.g., word1 pos1 ...) and concatenated to the end of the post (e.g., word1, word2, ..., POS1, POS2, ...).
• Topic Features: Topic membership of each post after LDA topic modeling (Blei et al., 2003) had been run on the entire post corpus.
• Word Embedding Features: The 100-dimensional word embedding sums for each word in a post and the cosine similarity between the summed embedding vectors for the target post and its parent post.
• Lexical Features: Sentiment lexicon features outlined in Mourad et al. (2018), excluding the DAL and NRC Hashtag Lexicons.
1 Please refer to the supplemental material for a full description of the feature set.
We tested using the top 50 features selected using reliefF and reducing the feature size to 50 using Principal Component Analysis (PCA), as well as using the full feature set. We found that the full feature set (2,855 features in total) performed significantly better than the reliefF and PCA feature sets. We used the full feature set in our final model.
4.3 pkudblab-PIP
The highest performing CNN model applied to the SemEval 2016 benchmark dataset, pkudblab, was submitted by Wei et al. (2016). Their model applied a convolutional neural network to the word embedding features of a tweet. We modified this model for agreement prediction. The architecture of the resulting model, pkudblab-PIP, is shown in Figure 2. We used pre-trained embeddings (300 dimensions) published by the word2vec team (Mikolov et al., 2013). Given an input of word embeddings of size d by |s|, where d is the size of the word embedding and |s| is the normalized post length, the input was fed into a convolution layer. The convolution layer contained filters with window sizes (m) of 3, 4, and 5 words and 100 filters (n) for each window size. The outputs were then passed to a max-pooling layer and finally through a fully-connected sigmoid layer to produce the final output value. We trained the model using a mean squared error loss function and used a 50% dropout layer after the max-pooling layer.
Figure 2: The architecture of pkudblab-PIP for stance polarity and intensity prediction.
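As a rough illustration of this architecture, the sketch below assembles a comparable CNN regressor in Keras. The layer sizes follow the description above and the training details in Appendix A.1.2 (window sizes 3, 4, and 5 with 100 filters each, max-pooling, 50% dropout, a sigmoid output, mean-squared-error loss, maximum length 150, 300-dimensional embeddings); padding, the convolution activation, and how the sigmoid output is mapped onto the [-1, +1] agreement range are assumptions rather than details taken from the paper.

```python
# Sketch of a comparable CNN regressor in Keras (not the authors' implementation).
import tensorflow as tf

MAX_LEN, EMB_DIM = 150, 300                                   # |s| and d

inputs = tf.keras.Input(shape=(MAX_LEN, EMB_DIM))             # pre-computed word2vec embeddings
pooled = []
for window in (3, 4, 5):
    conv = tf.keras.layers.Conv1D(filters=100, kernel_size=window,
                                  activation="relu")(inputs)  # m = window size, n = 100 filters
    pooled.append(tf.keras.layers.GlobalMaxPooling1D()(conv))
features = tf.keras.layers.Dropout(0.5)(tf.keras.layers.Concatenate()(pooled))
output = tf.keras.layers.Dense(1, activation="sigmoid")(features)

model = tf.keras.Model(inputs, output)
model.compile(optimizer="adam", loss="mse")
# Note: a sigmoid outputs values in [0, 1]; mapping predictions onto the [-1, +1]
# agreement range (e.g., 2 * y - 1) is an assumption, since that step is not spelled
# out in the text.
```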
4.4 T-PAN-PIP
The RNN model (T-PAN-PIP) is adapted from the T-PAN framework by Dey et al. (2018), which was one of the highest performing neural network models on the SemEval 2016 benchmark dataset. The T-PAN framework uses a two-phase LSTM model with attention, based on the architecture proposed by Du et al. (2017). We adapted this model for regression by making some modifications. Our adapted model (T-PAN-PIP) uses only a single-phase architecture, resembling Du et al.'s original design (2017), where the output is the predicted agreement value instead of a categorical prediction. Figure 3 illustrates the architecture of T-PAN-PIP. It uses word embedding features (with embedding size 300) as input to two network branches. The first branch feeds the word embeddings into a bi-directional LSTM (Bi-LSTM) with 256 hidden units, which outputs the hidden states for each direction (128 hidden units each) at every time step. The other branch appends the average topic embedding from the topic text (i.e., the text of the post that the input is responding to) to the input embeddings and feeds that input into a fully-connected softmax layer, to calculate what Dey et al. (2018) called the "subjectivity attention signal." The subjectivity attention signals are a linear mapping of each input word's target-augmented embedding to a scalar value that represents the importance of each word in the input relative to the target's text. These values serve as the attention weights that are used to scale the hidden state output of the Bi-LSTM. The weighted attention application layer combines each attention weight with its corresponding hidden state output, as shown in (1):
Q = \frac{1}{|s|} \sum_{s=0}^{|s|-1} a_s h_s \quad (1)
where a_s is the attention signal for word s, h_s is the hidden layer output of the Bi-LSTM for word s, |s| is the total number of words, and Q is the resulting attention-weighted vector of size 256, the size of the hidden-unit output of the Bi-LSTM. The output Q feeds into a fully-connected sigmoid layer, which outputs the predicted agreement value. We train the model using a mean absolute error loss function.
Figure 3: The architecture of T-PAN-PIP for stance polarity and intensity prediction.
5 Empirical Dataset Description
The dataset was constructed from three separate empirical studies collected in Fall 2017, Spring 2018, and Spring 2019. In each study, a class of undergraduate students in an entry-level sociology class was offered extra credit to participate in discussions in ICAS. Each student was asked to discuss four different issues relating to the content they were covering in class. The issues were: 1) Healthcare: Should individuals be required by the government to have health insurance? 2) Same Sex Adoption: Should same-sex married couples be allowed to adopt children? 3) Guns on Campus: Should students with a concealed carry permit be allowed to carry guns on campus? 4) Religion and Medicine: Should parents who believe in healing through prayer be allowed to deny medical treatment for their child?
Under each issue, there were four positions to discuss (with the exception of the Healthcare issue for Fall 2017, which had only three positions). The positions were constructed such that there was one strongly conservative position, one moderately conservative position, one moderately liberal position, and one strongly liberal position. The students were asked to post ten arguments under each issue. The combined dataset contains 22,606 total arguments from 904 different users. Of those arguments, 11,802 are replying to a position, and 10,804 are replying to another argument. The average depth of a reply thread tends to be shallow, with 52% of arguments on the first level (replies to a position), 44% on the second level, 3% on the third level, and 1% on the remaining levels (the deepest level was 5). When a student posted an argument, they were required to annotate their argument with an agreement value. Overall, argument agreement values skew positive. Figure 4 displays a histogram of the agreement values for the arguments in the dataset.
Figure 4: A histogram of the different agreement values across all of the issues in the cyber argumentation platform.
The annotated labels in this dataset are self-labeled, meaning that when a user replies to a post, they provide their own stance polarity and intensity label. The label is a reflection of the author's intended stance toward a post, where the post's text is a semantic description of that intention. While these label values are somewhat subjective, they are an accurate reflection of their author's agreement, which we need to capture to analyze opinions in the discussion.
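To make the relationship between an annotated agreement value and the stance polarity and intensity used throughout the paper explicit, the short sketch below (illustrative code, not part of the study) decomposes an agreement value into its polarity and intensity and tallies a polarity distribution; the sample values are arbitrary points on the 0.2-step annotation scale.

```python
# Illustrative decomposition of a self-annotated agreement value into stance polarity and intensity.
from collections import Counter

def polarity(agreement: float) -> str:
    """Sign of the agreement value -> stance polarity."""
    if agreement > 0:
        return "Favoring"
    if agreement < 0:
        return "Opposing"
    return "Neutral"

def intensity(agreement: float) -> float:
    """Magnitude of the agreement value -> stance intensity."""
    return abs(agreement)

# Toy values on the 0.2-step annotation scale
agreements = [0.6, -1.0, 0.0, 0.2, -0.4]
print(Counter(polarity(a) for a in agreements))   # Counter({'Favoring': 2, 'Opposing': 2, 'Neutral': 1})
print([intensity(a) for a in agreements])         # [0.6, 1.0, 0.0, 0.2, 0.4]
```

This sign/magnitude mapping is the same one used later when the continuous predictions are converted into Favoring, Opposing, and Neutral classes for stance detection evaluation.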
Self-annotated datasets like this one have been used in stance detection for argumentation mining in the past (see (Boltuˇzi´c and ˇSnajder, 2014; Hasan and Ng, 2014)). 6 Empirical Study Evaluation 6.1 Agreement Prediction Problem In this study, we want to evaluate the models’ performance on the stance polarity and intensity prediction problem. We separated the dataset into training and testing sets using a 75-25 split. For the neural network models (pkudblab-PIP and TPAN-PIP), we separated out 10% of the training set as a validation set to detect over-fitting. The split was performed randomly without consideration of the discussion issue. Each issue was represented proportionally in the training and testing data sets with a maximum discrepancy of less than 1%. For evaluation, we want to see how well the regression models are able to predict the continuous agreement value for a post. We report the root-mean-squared error (RMSE) for the predicted results. 5752 6.2 Agreement Prediction Models for Stance Detection We wanted to investigate whether training models for agreement prediction would degrade their performance for stance detection. Ideally, these models should learn to identify both stance intensity without impacting their ability to identify stance polarity. To test this, we compared each model to their original stance classification models described in their source papers. Thus, ridge-H is compared with an SVM trained on the same feature set (SVMH), ridge-S is compared to a Linear-SVM trained on the same feature set (SVM-S), SVR-RF-R is compared to a majority-voting ensemble of a linearSVM, Random Forest, and Na¨ıve Bayes classifier using the same feature set (SVM-RF-NB), pkudblab-PIP is compared to the original pkudblab model trained using a softmax cross-entropy loss function, and T-PAN-PIP is compared to the original T-PAN model trained using a softmax crossentropy loss function. We trained the classification models for stance detection by converting the continuous agreement values into categorical polarity values. When converted into categorical values, all of the positive agreement values are classified as Favoring, all negative values are classified as Opposing, and zero values are classified as Neutral. In the dataset, 12,258 arguments are Favoring (54%), 8962 arguments are Opposing (40%), and 1386 arguments are Neutral (6%). To assess the stance detection performance of the models trained for agreement prediction, we converted the predicted continuous agreement values output by the models into the categorical values using the same method. For evaluation, we report both the accuracy value of the predictions and the macro-average F1-scores for the Favoring and Opposing classes on the testing set. This scoring scheme allows us to treat the Neutral category as a class that is not of interest (Mourad et al., 2018). 7 Evaluation Results 7.1 Agreement Prediction Results The results for agreement prediction are shown in Table 1. A mean prediction baseline model is shown in the table to demonstrate the difficulty associated with the problem. The neural network models perform worse than both the ridge regression and ensemble models. Ridge-S performed slightly better than Ridge-M due to the sum word Model RMSE Baseline (Mean) 0.718 Ridge-M 0.620 Ridge-S 0.615 SVR-RF-R 0.596 pkudblab-PIP 0.657 T-PAN-PIP 0.623 Table 1: The results of the regression models for the Agreement prediction task. The best result is bolded. embedding features. 
The best performing model was the SVR-RF-R model with an RMSE of 0.596. We performed feature analysis on the SVR-RFR model using ablation testing (i.e., removing one feature set from the model). Results showed that removing a single features set for each type of feature (Word N-grams, Character N-grams, POS N-grams, Topic features, Lexicon features, word embedding features, and cosine similarity feature) impacted the RMSE of the model by less than 0.005. Using only the N-gram features resulted in an RMSE of 0.599, which is only a 0.0047 decrease from the total. This result matches the difference between Ridge-M (only uses N-gram features) and Ridge-S (includes N-gram and word embedding features). Since the N-gram features contain most of the textual information, it had the most impact on the model, while the additional features had smaller effects on the model accuracy. 7.2 Agreement Prediction models for Stance Detection Results We compare the models trained on the agreement prediction task to their classification model counterparts in terms of performance on the stance detection task. Tables 2 and 3 show the comparison between the models in terms of accuracy and (macro) F1-score. SVR-RF-R has the best accuracy and F1-score for stance detection, which outperformed its classifier counterpart (SVM-RF-NB) by 2.12% in accuracy and +0.016 in F1-score. Three of the models trained for stance polarity and intensity prediction, SVR-RF-R, Ridge-S, and T-PAN-PIP, outperformed their classifier counterparts in accuracy by 1-2% and F1-score by +0.009 on average. Two of the models trained for stance polarity and intensity prediction, Ridge-H and pkudblab-PIP, slightly underperformed their classifier counterparts in accuracy by -0.36% and F1-score by -0.011 on average. 5753 Stance Polarity Prediction Model Polarity and Intensity Prediction Model Model Accuracy Model Accuracy Diff Baseline (Most Frequent) 54.36% Baseline (Mean) 54.36% 0.00% SVM-H 68.48% Ridge-H 68.16% -0.32% SVM-S 67.63% Ridge-S 68.84% +1.21% SVM-RF-NB 68.31% SVR-RF-R 70.43% +2.12% pkudblab 67.28% pkudblab-PIP 66.89% -0.39% T-PAN 65.55% T-PAN-PIP 66.64% +1.09% Table 2: The classification accuracy of the stance polarity prediction models and the stance polarity and intensity prediction models for Stance Detection (polarity only) classification. Stance Polarity Prediction Model Polarity and Intensity Prediction Model Model F1-Score Model F1-Score Diff Baseline (Most Frequent) 0.352 Baseline (Mean) 0.352 0.000 SVM-H 0.701 Ridge-H 0.695 -0.006 SVM-S 0.697 Ridge-S 0.703 +0.006 SVM-RF-NB 0.705 SVR-RF-R 0.721 +0.016 pkudblab 0.688 pkudblab-PIP 0.672 -0.016 T-PAN 0.673 T-PAN-PIP 0.678 +0.005 Table 3: The F1-scores of the stance polarity prediction models and the stance polarity and intensity prediction models for Stance Detection (polarity only) classification. 8 Discussion The models behaved very similarly on the agreement prediction problem, where the difference between the best performing model and the worst performing model is only 0.061. Overall, the best model received an RMSE of 0.596, which is reasonably good but can be improved. T-PAN-PIP had the worst performance, which is surprising, as it was the only model to include the parent post’s information into its prediction, which should have helped improve its performance. 
It is possible that its architecture is unsuitable for agreement prediction; other architectures have been deployed that include a post’s parent and ancestors into a stance prediction, which might be more suitable for agreement prediction. Future model designs should better incorporate a post’s parent information into their predictions. The difference in performance between the agreement prediction models and the classification models on the stance detection task was small and sometimes better. This demonstrates that the models learning to identify stance intensity do so without significant loss of performance in identifying stance polarity. Larger gains in performance will likely require information about the post’s author. Some post authors will state strong levels of agreement in their statements, but annotate their argument with weaker agreement levels. For example, one author wrote, “Agree completely. Government should stay out of healthcare.” and annotated that argument with an agreement value of +0.6. The authors were instructed on how to annotate their posts, but the annotations themselves were left to the post’s author’s discretion. Thus including author information into our models would likely improve the stance polarity and intensity prediction results. 9 Conclusion We introduce a new research problem called stance polarity and intensity prediction in a responsive relationship between posts, which predicts both an online post’s stance polarity and intensity value toward another post. This problem encapsulates stance detection and adds the additional difficulty of detecting subtle differences in intensity found in the text. We introduced a new large empirical dataset for agreement prediction, collected using a cyber argumentation platform. We implemented five models, adapted from top-performing stance detection models, for evaluation on the new dataset for agreement prediction. Our empirical results demonstrate that the ensemble model SVR-RF-R performed the best for agreement prediction and 5754 models trained for agreement prediction learn to differentiate between intensity values without degrading their performance for determining stance polarity. Research into this new problem of agreement prediction will allow for a more nuanced annotation and analysis of online debate. Acknowledgments We would like to acknowledge Md Mahfuzer Rahman and Najla Althuniyan for their efforts in developing the ICAS platform and planning the empirical studies. We are also grateful to the anonymous reviewers for their constructive input during the review process. References Leila Amgoud and Jonathan Ben-Naim. 2018. Weighted Bipolar Argumentation Graphs: Axioms and Semantics. In IJCAI’18 Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 5194–5198, Stockholm, Sweden. Ravi S. Arvapally, Xiaoqing Frank Liu, Fiona FuiHoon Nah, and Wei Jiang. 2017. Identifying outlier opinions in an online intelligent argumentation system. Concurrency and Computation: Practice and Experience, page e4107. Ravi Santosh Arvapally and Xiaoqing (Frank) Liu. 2013. Polarisation assessment in an intelligent argumentation system using fuzzy clustering algorithm for collaborative decision support. 4(3):181–208. Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SENTIWORDNET 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining. 
In Proceedings of the Seventh International Conference on Language Resources and Evaluation, volume 10, pages 2200 – 2210, Valletta, Malta. Philip Bell and Marcia C. Linn. 2000. Scientific arguments as learning artifacts: designing for learning from the web with KIE. International Journal of Science Education, 22(8):797–817. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3(Jan):993–1022. Filip Boltuˇzi´c and Jan ˇSnajder. 2014. Back up your Stance: Recognizing Arguments in Online Discussions. In Proceedings of the First Workshop on Argumentation Mining, pages 49–58, Baltimore, Maryland. Association for Computational Linguistics. Clinton Burfoot, Steven Bird, and Timothy Baldwin. 2011. Collective Classification of Congressional Floor-debate Transcripts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies Volume 1, HLT ’11, pages 1506–1515, Stroudsburg, PA, USA. Association for Computational Linguistics. Event-place: Portland, Oregon. N. Chanda and X. F. Liu. 2015. Intelligent analysis of software architecture rationale for collaborative software design. In 2015 International Conference on Collaboration Technologies and Systems (CTS), pages 287–294. Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval2017), pages 69–76, Vancouver, Canada. ArXiv: 1704.05972. Kuntal Dey, Ritvik Shrivastava, and Saroj Kaushik. 2018. Topical Stance Detection for Twitter: A TwoPhase LSTM Model Using Attention. In Advances in Information Retrieval, Lecture Notes in Computer Science, pages 529–536. Springer International Publishing. Rui Dong, Yizhou Sun, Lu Wang, Yupeng Gu, and Yuan Zhong. 2017. Weakly-Guided User Stance Prediction via Joint Modeling of Content and Social Interaction. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM ’17, pages 1249–1258, New York, NY, USA. ACM. Event-place: Singapore, Singapore. Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural attention networks. In Proceedings of the TwentySixth International Joint Conference on Artificial Intelligence. Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural End-to-End Learning for Computational Argumentation Mining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 11–22, Vancourver, Canada. Association for Computational Linguistics. ArXiv: 1704.06104. Heba Elfardy and Mona Diab. 2016. CU-GWU Perspective at SemEval-2016 Task 6: Ideological Stance Detection in Informal Text. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 434–439, San Diego, California. Association for Computational Linguistics. Andreas Hanselowski, Avinesh PVS, Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M. Meyer, and Iryna Gurevych. 2018. A Retrospective Analysis of the Fake News Challenge Stance Detection Task. arXiv:1806.05180 [cs]. ArXiv: 1806.05180. 5755 Kazi Saidul Hasan and Vincent Ng. 2013. Stance Classification of Ideological Debates: Data, Models, Features, and Constraints. 
In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1348 – 1356. Kazi Saidul Hasan and Vincent Ng. 2014. Why are you taking this stance? identifying and classifying reasons in ideological debates. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 751–762. Association for Computational Linguistics. Minqing Hu and Bing Liu. 2004. Mining and Summarizing Customer Reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’04, pages 168–177, New York, NY, USA. ACM. Eventplace: Seattle, WA, USA. Yuki Igarashi, Hiroya Komatsu, Sosuke Kobayashi, Naoaki Okazaki, and Kentaro Inui. 2016. Tohoku at SemEval-2016 Task 6: Feature-based Model versus Convolutional Neural Network for Stance Detection. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 401–407, San Diego, California. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs]. ArXiv: 1412.6980. Mark Klein. 2011. How to Harvest Collective Wisdom on Complex Problems : An Introduction to the MIT Deliberatorium. Igor Kononenko, Edvard ˇSimec, and Marko RobnikˇSikonja. 1997. Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF. Applied Intelligence, 7(1):39–55. Werner Kunz and Horst W J Rittel. 1970. Issues as elements of information systems. volume 131, Berkeley. Gina-Anne Levow, Valerie Freeman, Alena Hrynkevich, Mari Ostendorf, Richard Wright, Julian Chan, Yi Luan, and Trang Tran. 2014. Recognition of stance strength and polarity in spontaneous speech. In 2014 IEEE Spoken Language Technology Workshop (SLT), pages 236–241. ISSN: null. Anders Edelbo Lillie and Emil Refsgaard Middelboe. 2019. Fake News Detection using Stance Classification: A Survey. arXiv:1907.00181 [cs]. ArXiv: 1907.00181. Marco Lippi and Paolo Torroni. 2016. Argumentation Mining: State of the Art and Emerging Trends. ACM Trans. Internet Technol., 16(2):10:1–10:25. X. Liu, R. Wanchoo, and R. S. Arvapally. 2011. Empirical study of an intelligent argumentation system in MCDM. In 2011 International Conference on Collaboration Technologies and Systems (CTS), pages 125–133. Xiaoqing (Frank) Liu, Eric Christopher Barnes, and Juha Erik Savolainen. 2012. Conflict detection and resolution for product line design in a collaborative decision making environment. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, CSCW ’12, pages 1327–1336. ACM. Event-place: Seattle, Washington, USA. Xiaoqing (Frank) Liu, Ekta Khudkhudia, Lei Wen, Vamshi Sajja, and Ming C. Leu. 2010. An Intelligent Computational Argumentation System for Supporting Collaborative Software Development Decision Making. In Farid Meziane, Sunil Vadera, and Ivan Giannoccaro, editors, Artificial Intelligence Applications for Improved Software Engineering Development: New Prospects, Advances in Computational Intelligence and Robotics, pages 167 – 180. IGI Global. Xiaoqing (Frank) Liu, Samir Raorane, and Ming C. Leu. 2007. A web-based intelligent collaborative system for engineering design. In W. D. Li, Chris McMahon, S. K. Ong, and Andrew Y. C. Nee, editors, Collaborative Product Design and Manufacturing Methodologies and Applications, Springer Series in Advanced Manufacturing, pages 37–58. Springer London. Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. 
In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics, pages 63 – 70. ArXiv: cs/0205028. Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Man´e, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Andrew Kachites McCallum. 2002. MALLET: A Machine Learning for Language Toolkit. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781 [cs]. ArXiv: 1301.3781. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 Task 6: Detecting Stance in Tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31– 41, San Diego, California. Association for Computational Linguistics. 5756 Saif M. Mohammad. 2016. Sentiment Analysis: Detecting Valence, Emotions, and Other Affectual States from Text. In Herbert L. Meiselman, editor, Emotion Measurement, pages 201–237. Woodhead Publishing. Saif M. Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-Canada: Building the Stateof-the-Art in Sentiment Analysis of Tweets. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 321–327. ArXiv: 1308.6242. Sara S. Mourad, Doaa M. Shawky, Hatem A. Fayed, and Ashraf H. Badawi. 2018. Stance Detection in Tweets Using a Majority Vote Classifier. In The International Conference on Advanced Machine Learning Technologies and Applications (AMLTA2018), Advances in Intelligent Systems and Computing, pages 375–384. Springer International Publishing. Huy Nguyen and Diane Litman. 2016. Context-aware Argumentative Relation Mining. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1127–1137, Berlin, Germany. Association for Computational Linguistics. Fabian Pedregosa, Ga¨el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and ´Edouard Duchesnay. 2011. Scikit-learn: Machine learning in python. Journal of Machine Learning Research, 12:2825–2830. Isaac Persing and Vincent Ng. 2016. End-to-End Argumentation Mining in Student Essays. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1384–1394, San Diego, California. Association for Computational Linguistics. Joseph Sirrianni, Xiaoqing (Frank) Liu, and Douglas Adams. 2018. Quantitative Modeling of Polarization in Online Intelligent Argumentation and Deliberation for Capturing Collective Intelligence. In 2018 IEEE International Conference on Cognitive Computing (ICCC), pages 57–64. 
Parinaz Sobhani, Diana Inkpen, and Stan Matwin. 2015. From Argumentation Mining to Stance Classification. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 67–77, Denver, CO. Association for Computational Linguistics. Parinaz Sobhani, Diana Inkpen, and Xiaodan Zhu. 2019. Exploring deep neural networks for multitarget stance detection. Computational Intelligence, 35(1):82–97. Parinaz Sobhani, Saif Mohammad, and Svetlana Kiritchenko. 2016. Detecting Stance in Tweets And Analyzing its Interaction with Sentiment. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 159–169, Berlin, Germany. Association for Computational Linguistics. Christian Stab and Iryna Gurevych. 2017. Parsing Argumentation Structures in Persuasive Essays. Computational Linguistics, 43(3):619–659. Manfred Stede and Jodi Schneider. 2018. Argumentation Mining, volume 11 of Synthesis Lecutres on Human Langauge Technologies. Morgan & Claypool Publishers. Qingying Sun, Zhongqing Wang, Qiaoming Zhu, and Guodong Zhou. 2018. Stance Detection with Hierarchical Attention Network. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2399–2409, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Stephen E. Toulmin. 2003. The Uses of Argument. Cambridge University Press, Cambridge. GoogleBooks-ID: 8UYgegaB1S0C. Martin Tutek, Ivan Sekulic, Paula Gombar, Ivan Paljak, Filip Culinovic, Filip Boltuzic, Mladen Karan, Domagoj Alagi´c, and Jan ˇSnajder. 2016. TakeLab at SemEval-2016 Task 6: Stance Classification in Tweets Using a Genetic Algorithm Based Ensemble. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 464–468, San Diego, California. Association for Computational Linguistics. Prashanth Vijayaraghavan, Ivan Sysoev, Soroush Vosoughi, and Deb Roy. 2016. DeepStance at SemEval-2016 Task 6: Detecting Stance in Tweets Using Character and Word-Level CNNs. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 425–431, San Diego, California. ArXiv: 1606.05694. Wan Wei, Xiao Zhang, Xuqin Liu, Wei Chen, and Tengjiao Wang. 2016. pkudblab at SemEval-2016 Task 6 : A Specific Convolutional Neural Network System for Effective Stance Detection. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 384–388, San Diego, California. Association for Computational Linguistics. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing Contextual Polarity in PhraseLevel Sentiment Analysis. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 347–354. Guido Zarrella and Amy Marsh. 2016. MITRE at SemEval-2016 Task 6: Transfer Learning for Stance 5757 Detection. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 458 – 463, San Diego, California. ArXiv: 1606.03784. A Appendices A.1 Extended Model Description The following sections give a more detailed description for some of the models used in our research. The models were written using the Sci-kit learn (Pedregosa et al., 2011) and TensorFlow libraries (Mart´ın Abadi et al., 2015). A.1.1 SVR-RF-R Feature Set Description The SVR-RF-R model used a total of 2855 features. They are listed below. Linguistic Features: • 1-3 word grams as binary vectors, count vectors, and tf-idf weighted vectors. 
Word grams must appear in at least 1% of posts and no more than 95% of posts. • 1-6 character grams as count vectors. Character grams must appear in at least 10% of posts and no more than 95% of posts. • 1-3 Part-Of-Speech grams as count vectors. The Part-Of-Speech tags were generated using the NLTK library (Loper and Bird, 2002). The POS tags were used in two formats, with the tags concatenated to their corresponding word (e.g. word1 POS1 word2 POS2 ...) and with the POS tags appended to the end of the sen-tence (e.g. word1 word2 ... word N POS1 POS2 . . . POS N). Topic Features: • Topic membership of each post. LDA topic modeling was run on the entire dataset. Different numbers of topics were tested and their performance was judged using silhouette score. The best performing model had two topics. Word Embedding Features: • 100-dimensional word embedding sums for each post. The word embeddings were trained using MALLET (McCallum, 2002). Similarity Features: • The cosine similarity between the summed word embeddings for the target post and its parent post. Lexical Features: • The ratio of positive words to all words, ratio of negative words to all words, sum count of posi-tive words, sum count of negative words, and the positive and negative count for each POS tag for the MPQA (Wilson et al., 2005) and SentiWordNet (Baccianella et al., 2010) lexicons. • The ratio of positive words to all words, ratio of negative words to all words, sum count of positive words, sum count of negative words for the Hu Liu Lexicon (Hu and Liu, 2004). • The sum score, maximum score, positive sum, and negative sum for sentiment tokens from the NRC lexicon (Mohammad et al., 2013). In their original paper, Mourad et al. (2018), used the reliefF (Kononenko et al., 1997) features selection technique to select the 50 most important features. We tested using the top 50 features selected using reliefF and reducing the feature size to 50 using Principal Component Analysis (PCA), as well as using the full fea-ture set. We found that the full feature set (2855 total) performed significantly better than the reliefF and PCA feature sets. We used the full feature set in our final model. A.1.2 pkudblab-PIP Training The pkudblab-PIP model used the following input sizes: • Word Embedding Size (d): 300. • Maximum Sentence Length (|s|): 150. Posts longer than 150 words were truncated from the beginning and posts less than 150 words were padded at the end. • Total number of filters: 300. 100 for each window size: 3, 4, and 5. The model was trained using a batch size of 64, a drop-out rate of 50%, and used an Adam optimizer (Kingma and Ba, 2014). A.1.3 T-PAN-PIP Training The T-PAN-PIP model used the following input sizes: • Word Embedding Size (d): 300. 5758 • Maximum Sentence Length (|s|): 150. Posts longer than 150 words were truncated from the beginning and posts less than 150 words were padded at the end. • LSTM hidden units: 256 total (128 for each direction). The model was trained using a batch size of 64 and used an Adam optimizer.
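As a rough illustration of the training setup described above, here is a minimal tf.keras sketch of a pkudblab-style CNN with the stated sizes (300-dimensional embeddings, maximum post length 150, 100 filters for each of the window sizes 3, 4 and 5, 50% dropout, Adam optimizer, batch size 64). The vocabulary size, the number of stance classes, and the data pipeline are illustrative placeholders, not details given in the original description.

```python
import tensorflow as tf

# Sizes taken from the description above; VOCAB_SIZE and NUM_CLASSES are
# illustrative placeholders (not specified in the original text).
VOCAB_SIZE, EMBED_DIM, MAX_LEN = 20000, 300, 150
FILTER_SIZES, NUM_FILTERS, NUM_CLASSES = (3, 4, 5), 100, 3

def build_pkudblab_style_cnn():
    tokens = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
    emb = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(tokens)
    # One convolution + global max-pool per window size, then concatenate.
    pooled = []
    for size in FILTER_SIZES:
        conv = tf.keras.layers.Conv1D(NUM_FILTERS, size, activation="relu")(emb)
        pooled.append(tf.keras.layers.GlobalMaxPooling1D()(conv))
    merged = tf.keras.layers.Concatenate()(pooled)
    merged = tf.keras.layers.Dropout(0.5)(merged)  # 50% dropout, as stated
    probs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(merged)
    model = tf.keras.Model(tokens, probs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Training would then use the stated batch size:
# build_pkudblab_style_cnn().fit(x_train, y_train, batch_size=64)
```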
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 538–555 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 538 Simple, Interpretable and Stable Method for Detecting Words with Usage Change across Corpora Hila Gonen∗1 Ganesh Jawahar∗2 Djam´e Seddah2 Yoav Goldberg1,3 1Department of Computer Science, Bar-Ilan University 2Inria Paris 3Allen Institute for Artificial Intelligence [email protected] [email protected] [email protected] [email protected] Abstract The problem of comparing two bodies of text and searching for words that differ in their usage between them arises often in digital humanities and computational social science. This is commonly approached by training word embeddings on each corpus, aligning the vector spaces, and looking for words whose cosine distance in the aligned space is large. However, these methods often require extensive filtering of the vocabulary to perform well, and—as we show in this work—result in unstable, and hence less reliable, results. We propose an alternative approach that does not use vector space alignment, and instead considers the neighbors of each word. The method is simple, interpretable and stable. We demonstrate its effectiveness in 9 different setups, considering different corpus splitting criteria (age, gender and profession of tweet authors, time of tweet) and different languages (English, French and Hebrew). 1 Introduction Analyzing differences in corpora from different sources (different time periods, populations, geographic regions, news outlets, etc) is a central use case in digital humanities and computational social science. A particular methodology is to identify individual words that are used differently in the different corpora. This includes words that have their meaning changed over time periods (Kim et al., 2014; Kulkarni et al., 2015; Hamilton et al., 2016b; Kutuzov et al., 2018; Tahmasebi et al., 2018), and words that are used differently by different populations (Azarbonyad et al., 2017; Rudolph et al., 2017). It is thus desired to have an automatic, robust and simple method for detecting such potential changes in word usage and surfacing them for human analysis. In this work we present such a method. ∗Equal contribution. A popular method for performing the task (§4) is to train word embeddings on each corpus and then to project one space to the other using a vectorspace alignment algorithm. Then, distances between a word-form to itself in the aligned space are used as an estimation of word usage change (Hamilton et al., 2016b). We show that the common alignment-based approach is unstable, and hence less reliable for the usage change detection task (§3,§7). In addition, it is also sensitive to proper nouns and requires filtering them. We propose a new and simple method for detecting usage change, that does not involve vector space alignment (§5). Instead of trying to align two different vector spaces, we propose to work directly in the shared vocabulary space: we take the neighbors of a word in a vector space to reflect its usage, and consider words that have drastically different neighbours in the spaces induced by the different corpora to be words subjected to usage change. The intuition behind this approach is that words that are used significantly differently across corpora are expected to have different contexts and thus to have only few neighboring words in common. 
In order to determine the extent of the usage change of a word, we simply consider its top-k neighbors in each of the two corpora, and compute the size of the intersection of the two lists. The smaller the intersection is, the bigger we expect the change to be. The words are ranked accordingly. The advantages of our method are the following: 1. Simplicity: the method is extremely simple to implement and apply, with no need for space alignment, hyperparameter tuning, and vocabulary filtering, except for simple frequency cutoffs. 2. Stability: Our method is stable, outputting similar results across different word embed539 dings trained on the same corpora, in contrast to the alignment-based approach. 3. Interpretability: The ranking produced by our method is very intuitive to analyze. Looking at the neighborhood of a word in the two corpora reveals both the meaning of the word in each, and the extent to which the word has changed. 4. Locality: The interpretability aspect is closely linked to the locality of the decision. In our approach, the score of each word is determined only by its own neighbours in each of the spaces. In contrast, in the projection based method the similarity of a pair of words after the projection depends on the projection process, which implicitly takes into account all the other words in both spaces and their relations to each other, as well as the projection lexicon itself, and the projection algorithm. This makes the algorithmic predictions of the projection-based methods opaque and practically impossible to reason about. We demonstrate the applicability and robustness of the proposed method (§7) by performing a series of experiments in which we use it to identify word usage changes in a variety of corpus pairs, reflecting different data division criteria. We also demonstrate the cross-linguistic applicability of the method by successfully applying it to two additional languages beyond English: French (a Romance language) and Hebrew (a Semitic language). We argue that future work on detecting word change should use our method as an alternative to the now dominant projection-based method. To this end, we provide a toolkit for detecting and visualizing word usage change across corpora.1 2 Task Definition Our aim is to analyze differences between corpora by detecting words that are used differently across them. This task is often referred to as “detecting meaning change” (Azarbonyad et al., 2017; Del Tredici et al., 2019). However, we find the name “meaning change” to be misleading. Words may have several meanings in the different corpora, but different dominant sense in each corpus, indicating different use of the 1https://github.com/gonenhila/usage_ change word. For this reason, we refer to this task as “detecting usage change”. We define our task as follows: given two corpora with substantial overlapping vocabularies, identify words that their predominant use is different in the two corpora. The algorithm should return a ranked list of words, from the candidate that is most likely to have undergone usage-change, to the least likely. Since the primary use of such algorithm is corpus-based research, we expect a human to manually verify the results. To this end, while the method does not need to be completely accurate, it is desirable that most of the top returned words are indeed those that underwent change, and it is also desirable to provide explanations or interpretations as to the usage of the word in each corpus. 
Lastly, as humans are susceptible to being convinced by algorithms, we prefer algorithms that reflect real trends in the data and not accidental changes in environmental conditions.

3 Stability

A desired property of an analysis method is stability: when applied several times under slightly different conditions, we expect the method to return the same, or very similar, results. Insignificant changes in the initial conditions should result in insignificant changes in the output. This increases the likelihood that the uncovered effects are real and not just artifacts of the initial conditions. Recent works question the stability of word embedding algorithms, demonstrating that different training runs produce different results, especially with small underlying datasets. Antoniak and Mimno (2018) focus on the cosine-similarity between words in the learned embedding space, showing large variability under minor manipulations of the corpus. Wendlandt et al. (2018) make a similar argument, showing that word embeddings are unstable by looking at the 10-nearest neighbors (NN) of a word across the different embeddings, and showing that larger lists of nearest neighbors are generally more stable. In this work, we are concerned with the stability of usage-change detection algorithms, and present a metric for measuring this stability. A usage-change detection algorithm takes as input two corpora, and returns a ranked list r of candidate words, sorted from the most likely to have changed to the least likely. For a stable algorithm, we expect different runs to return similar lists. While we do not care about the exact position of a word within a list, we do care about the composition of words at the top of the list. We thus propose a measure we call intersection@k, measuring the percentage of shared words in the top-k predictions of both outputs:

$\text{intersection@k}(r_1, r_2) = \frac{|r_1^k \cap r_2^k|}{k}$    (1)

where $r_1$ and $r_2$ are the two ranked lists, and $r_i^k$ is the set of top-k ranked words in ranking $r_i$. A value of 0 in this measure means that there are no words in the intersection, which indicates a high level of variability in the results, while a value of 1 means that all the words are in the intersection, indicating that the results are fully consistent. We expect to see higher intersection@k as k grows. This expectation is confirmed by our experiments in Section 7.2. We measure the stability of the usage-change detection algorithms with respect to a change in the underlying word embeddings: we apply the intersection@k metric to two runs of the usage-change detection algorithm on the same corpus pair, where each run is based on a different run of the underlying word embedding algorithm.

4 The Predominant Approach

The most prominent method for detecting usage change is that of Hamilton et al. (2016b), originally applied to detect shifts in dominant word senses across time. It is still the predominant approach in practice,2 with recent works building upon it (Yao et al., 2018; Rudolph and Blei, 2018). This method was also shown to be the best performing one among several others (Schlechtweg et al., 2019). It works by training word embeddings on the two corpora, aligning the spaces, and then ranking the words by the cosine-distance between their representations in the two spaces, where a large distance is expected to indicate a significant change in meaning. We refer to this method as AlignCos.
The alignment is performed by finding an orthogonal linear transformation Q that, when given matrices X and Y, projects X to Y while minimizing the squared loss:

$Q = \arg\min_{Q} \|QX - Y\|^2, \quad \text{s.t. } Q \text{ is orthogonal}$

(Footnote 2: This is also indicated by the large number of citations: 350 according to Google Scholar.)

The rows of X correspond to embeddings of words in space A, while the rows of Y are the corresponding embeddings in space B. This optimization is solved using the Orthogonal Procrustes (OP) method (Schönemann, 1966), which provides a closed-form solution. Vector space alignment methods are also extensively studied outside the area of detecting word change, primarily for aligning embedding spaces across language pairs (Xing et al., 2015; Artetxe et al., 2018b; Lample et al., 2018a; Artetxe et al., 2018a). There too, the Orthogonal Procrustes method is taken to be a top contender (Lample et al., 2018b; Kementchedjhieva et al., 2018).

4.1 Shortcomings of the alignment approach

Self-contradicting objective. Note that the optimization procedure in the (linear) alignment stage attempts to project each word to itself. This includes words that changed usage, and which therefore should not be near each other in the space. While one may hope that other words and the linearity constraints will intervene, the method may mistakenly succeed in projecting words that did change usage next to each other, at the expense of projecting words that did not change usage further apart than they should be. This is an inherent problem with any alignment-based method that attempts to project the entire vocabulary onto itself.

Requires non-trivial filtering to work well. In addition, the alignment-based method requires non-trivial vocabulary filtering to work well. For example, Hamilton et al. (2016b) extensively filter proper nouns. Indeed, without such filtering, proper nouns dominate the top of the changed-words list. This does not indicate real word usage change, but is an artifact of names being hard to map across embedding spaces. In that respect, it makes sense to filter proper nouns. However, some cases of word usage change do involve names. For example, the word "Harlem", which is used as either the name of a neighborhood in NY or the name of a basketball team, was detected by our method as a word whose usage changed between tweets of celebrities with different occupations (§7.1).

Not stable across runs. As we discuss in Section 3 and show in Section 7.2, the approach is not very stable with respect to different random seeds in the embeddings algorithm.

Table 1: Statistics of the different splits (#words / #tweets / #vocab per corpus).
Age: young 58M / 5M / 42K; older 116M / 8M / 73K
Gender: male 293M / 23M / 114K; female 126M / 9M / 69K
Occupation: creator 87M / 6M / 63K; sports 132M / 11M / 66K; performer 126M / 10M / 69K
Day-of-week: weekday 142M / 9M / 81K; weekend 114M / 7M / 72K
Hebrew: 2014 42M / 4M / 84K; 2018 155M / 13M / 187K
French: 2014 867M / 82M / 263K; 2018 1B / 104M / 350K

5 Nearest Neighbors as a Proxy for Meaning

Rather than attempting to project two embedding spaces into a shared space (which may not even map 1:1), we propose to work in the shared vocabulary space. The underlying intuition is that words whose usage changed are likely to be interchangeable with different sets of words, and thus to have different neighbors in the two embedding spaces. This gives rise to a simple and effective algorithm: we represent each word in a corpus as the set of its top k nearest neighbors (NN).
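As a point of reference, the alignment-and-ranking step of AlignCos described above can be written in a few lines of NumPy/SciPy. This is a minimal sketch rather than the authors' implementation; it assumes X and Y are matrices whose rows hold the embeddings of the same shared-vocabulary words in the two spaces, and it applies the orthogonal transformation on the right, as SciPy's routine expects for row vectors.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def aligncos_ranking(X, Y, words):
    """Rank words by cosine distance after Orthogonal Procrustes alignment.

    X, Y: (n_words, dim) arrays of embeddings for the same words in the two
    spaces, one word per row, in the same order as `words`.
    Returns the words sorted from most to least changed (AlignCos criterion).
    """
    # Closed-form solution of min_Q ||XQ - Y||_F with Q orthogonal.
    Q, _ = orthogonal_procrustes(X, Y)
    X_aligned = X @ Q
    # Cosine similarity between each word and itself across the two spaces.
    sims = np.sum(X_aligned * Y, axis=1) / (
        np.linalg.norm(X_aligned, axis=1) * np.linalg.norm(Y, axis=1))
    distances = 1.0 - sims
    order = np.argsort(-distances)  # largest distance first
    return [words[i] for i in order]
```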
We then compute the score for word usage change across corpora by considering the size of the intersection of the two sets (not to be confused with intersection@k defined in Section 3):

$\text{score}_k(w) = -|NN_1^k(w) \cap NN_2^k(w)|$    (2)

where $NN_i^k(w)$ is the set of k-nearest neighbors of word w in space i. Words with a smaller intersection are ranked higher in terms of their meaning-change potential. We only consider words that appear in both vocabularies, as words that are rare in one of the corpora are easy to spot using their frequency in the two spaces, and do not neatly fit the definition of usage change. Note that our method does not require extensive filtering of words: we only filter words based on their frequency in the corpus.3 We use a large value of k = 1000 in practice,4 because large neighbor sets are more stable than small ones (Wendlandt et al., 2018), leading to improved stability for our algorithm as well. (Footnote 3: For English experiments we also filter stopwords according to the predefined list from NLTK. Footnote 4: While this value may seem arbitrary, we tested several values in that range, which yielded very similar results. However, the appropriate range may change when used with smaller corpora or substantially different vocabulary sizes. We consider k to be the only hyperparameter of our method, and note that it is rather easy to set.)

Limitations. Similar to previous methods, our method assumes high-quality embeddings, and hence also a relatively large corpus. Indeed, in many cases we can expect large quantities of data to be available to the user, especially considering that the data needed is raw text rather than labeled text. Using a limited amount of data results in lower-quality embeddings, but also in a smaller vocabulary, which might affect our method. For high-quality embeddings with small vocabulary sizes, we believe that changing k accordingly should suffice. Naturally, results will likely degrade as embedding quality deteriorates. It is also important to note that, like previous approaches, our method does not attempt to provide any guarantees that the detected words have indeed undergone usage change. It is only intended to propose and highlight candidates for such words. These candidates are meant to later be verified by a user who needs to interpret the results in light of their hypothesis and familiarity with the domain. Unlike previous methods, as we discuss in Section 7.4, our method also provides intuitive means to aid in such an interpretation process.

6 Experimental Setup

We compare our proposed method (NN) to the method of Hamilton et al. (2016b) described in Section 4 (AlignCos), in which the vector spaces are first aligned using the OP algorithm, and then words are ranked according to the cosine-distance between the word representations in the two spaces.5 This method was shown to outperform all others that were compared to it by Schlechtweg et al. (2019). We demonstrate our approach by using it to detect change in word usage in different scenarios. We use the following corpora, whose statistics are listed in Table 1. We consider three demographics-based distinctions (age, gender, occupation), a day-of-week 5Some extensions may yield improved results (filtering out proper names, as done in Hamilton et al.
(2016b), or jointly learning and aligning the spaces (Bamler and Mandt, 2017; Rudolph et al., 2017; Rudolph and Blei, 2018; Yao et al., 2018), but we stick to this setting as it is the most general out of this line of work, and the one most commonly used in practice, for which an open implementation is available. 542 based distinction, and short-term (4y) diachronic distinctions. We also compare to the longer-term (90y) diachronic setup of Hamilton et al. (2016b), which is based on Google books. Author Demographics The Celebrity Profiling corpus (Wiegmann et al., 2019) consists of tweets from celebrities along with their traits such as age, gender and occupation. Based on these labels, we create the following splits: (1) Age: Young (birthyear 1990–2009) vs. Older (birthyear 1950– 1969); (2) Gender: Male vs. Female; (3) Occupation: pairwise splits with Performer, Sports and Creator. Day-of-week Yang and Leskovec (2011) collect 580 million tweets in English from June 2009 to February 2010, along with their time-stamps. As this is a fairly large corpus, we consider the tweets of a single month (November 2009). We create a split based on the Day-of-Week: weekday (tweets created on Tuesday and Wednesday) vs. weekend (tweets created on Saturday and Sunday). We remove duplicated tweets, as preliminary experiments revealed odd behavior of the representations due to heavily duplicated spam tweets. French Diachronic (4y, tweets) Abitbol et al. (2018) compile a collection of tweets in French between the years 2014 and 2018. The authors utilize several heuristics based on the users’ spatial information to consider tweets from users based in French territory only. We use the 2014 and 2018 portions of the data, and create a split accordingly. Hebrew Diachronic (4y, tweets) The Hebrew data we use is taken from a collection of Hebrew tweets we collected for several consecutive years, up to 2018. The collection was performed by using the streaming API and filtering for tweets containing at least one of the top 400 most frequent Hebrew words. We use the 2014 and 2018 portions of the data, and create a split accordingly. English Diachronic (90y, books) For diachronic study on English corpora, we make use of the embeddings trained on Fiction from Google Books (Davies, 2015) provided by the authors of Hamilton et al. (2016b), specifically for the two years, 1900 and 1990. These embeddings are originally aligned using Orthogonal Procrustes and the words whose relative frequencies are above 10−5 in both the time periods are ranked using cosine distance. 6.1 Implementation details Tokenization and Word Embeddings We use 300 dimensions word2vec vectors with 4 words context window. Further details of embeddings algorithm and tokenization are available in the appendix. Vocabulary and Filtering We perform frequency-based filtering of the vocabulary, removing stop words (the most frequent 200 words for each corpus, as well as English stop words as defined in nltk6), as well as low frequency words (we discard the 20% least frequent words in each corpus, and require a minimum of 200 occurrences). Notably, we do not perform any other form of filtering, and keep proper-nouns and person-names intact. We consider neighbors having a raw frequency greater than 100 and identify 1000 such nearest neighbors (k =1000) to perform the intersection. 
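A minimal sketch of the proposed scoring step under the setup above (skip-gram word2vec with 300 dimensions and window 4, minimum count 20 as in the appendix, and k = 1000 neighbors); it illustrates Equation 2 but is not the released toolkit, the gensim 4 API is assumed, and the corpus iterators, shared-vocabulary construction, and exact frequency cutoffs are placeholders.

```python
from gensim.models import Word2Vec

def train_embeddings(sentences, seed=0):
    # sentences: iterable of token lists from one corpus (placeholder input).
    # Skip-gram with negative sampling, 300 dims, window 4 (gensim 4 API).
    model = Word2Vec(sentences, vector_size=300, window=4, min_count=20,
                     sg=1, negative=5, seed=seed)
    return model.wv

def usage_change_ranking(wv1, wv2, shared_vocab, k=1000):
    """Rank shared-vocabulary words by -|NN_1^k(w) & NN_2^k(w)| (Equation 2)."""
    scores = {}
    for w in shared_vocab:
        nn1 = {x for x, _ in wv1.most_similar(w, topn=k)}
        nn2 = {x for x, _ in wv2.most_similar(w, topn=k)}
        scores[w] = -len(nn1 & nn2)  # fewer shared neighbors => larger change
    # Sort descending by score, i.e. smallest intersection (most changed) first.
    return sorted(scores, key=scores.get, reverse=True)
```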
7 Results 7.1 Qualitative Evaluation: Detected Words We run our proposed method and AlignCos (Hamilton et al., 2016b) on the different scenarios described in Section 6, and manually inspect the results. While somewhat subjective, we believe that the consistent success on a broad setting, much larger than explored in any earlier work, is convincing. We provide examples for two of the setups (English Diachronic and Performer vs. Sports), with the rest of the setups in the appendix. For each one, we list a few interesting words detected by the method, accompanied by a brief explanation (according to the neighbors in each corpus). In addition, we depict the top-10 words our method yields for the Age split (Table 2), accompanied by the nearest neighbors in each corpus (excluding words in the intersection), to better understand the context. For comparison, we also mention the top-10 words according to the AlignCos method. Similar tables for the other splits are provided in the Appendix. Across all splits, our method is able to detect high quality words as words that undergo usage change, most of them easily explained by their neighboring words in the two corpora. As expected, we see that the AlignCos method (Hamilton et al., 6https://www.nltk.org/ 543 AGE (YOUNG VS. OLDER) NN neighbors in each corpus dem dese, yuh, them, nuh, dey, ayye, dats, tha, betta, fuk repub, democrats, centrist, manchin, primaries, party’s, alp, dfl, gopers, repubs dam damm, mannnnn, mannnn, mane, huh, ahh, oo, buggin, koo, mannn dams, basin, river, dredging, reservoir, drainage, wastewater, sewerage, refinery, canal rep reppin, wear, allegiance, all-american, wildcat, alumni, tryout, hoosier, recruit, ua sen., congresswoman, chairwoman, co-chairs, gazelka, salazar, amb, comptroller, staffer, cong assist points, shutout, scoresheet, scored, pts, hatrick, sheet, nil, sacks, assisting, contact, coordinate, locating, coordinating, administer, equip, consular, deploy, locate pr cameron, -pr, erik, lap, sargeant, laps, tundra, teamjev, caution, restart stunt, puerto, promotional, rico, creative, ploy, hire, spin, freelance, fema fr frfr, forreal, foreal, lmaooo, madd, tho, bck, bruhh, lmao, fwm pavone, incl, from, wrk, ger, joseph, covey, env, w, ans joint jawn, fusion, scorpion, sumn, spot, db, cb, joints, mgmt, fye high-level, convened, minsk, two-day, bilateral, counter-terrorism, delegations, asean, convene, liaison mega , fantastic, simulator, macau, lotus, fuji, bmw, awesome, mclaren, fab gujarat, becos, multi-billion, gta, rupees, dollar, maharashtra, major, crores, multi-million flow beard, vibin, jeezy, drizzy, lite, mohawk, dreads, sauna, boomin, vibe illicit, influx, accumulation, moisture, absorb, overwhelm, heart’s, drains, curtail, diverting icymi superintendent, bureau, commissioner, spokesman, exec, state’s, prosecutor, reuters, montgomery, conway re-upping, reichert, newsmakers, sherrod, column, arizona’s, otl, holcomb, rundown, wrap-up AlignCos top-10 leo, whip, savage, nd, cole, pb, ace, carter, fr, bb Table 2: Top-10 detected words from our method (NN) vs. AlignCos method (last row), for corpus split according to the age of the tweet-author. Each word from our method is accompanied by its top-10 neighbors in each of the two corpora (Young vs. Older). 2016b) is highly sensitive to names, featuring many in the top-10 lists across the different splits. As opposed to AlignCos, our method is robust to global changes in the embedding space, since it looks at many neighbors. 
As a result, it is not sensitive to groups of words that "move together" in the embedding space (which might be the case with names).

English (diachronic, 90y). The top-100 words identified by our method cover all the words attested as real semantic shifts in Hamilton et al. (2016b)'s top-10, except the word 'wanting'. Specifically, three attested words, 'gay', 'major' and 'check', are present in our top-10, which also has more interesting words not present in Hamilton et al. (2016b)'s top-10 (1900 vs. 1990): van (captain vs. vehicle), press (printing vs. places), oxford (location vs. university). In addition, interesting words that came up in the top-30 list are the following: headed (body part vs. move in a direction), mystery (difficulty in understanding vs. book genre).

Occupation (performer vs. sports). Interesting words found in the top-10 list are the following: cc (carbon copy vs. country club), duo (duet vs. pair of people), wing (politics vs. football player position). In addition, interesting words that came up in the top-30 list are the following: jazz (music genre vs. basketball team), worlds (general meaning vs. championships), stages (platforms vs. company (bikes)), record (music record vs. achievement), harlem (neighborhood vs. basketball team).

7.2 Quantitative Evaluation: Stability

We compare the stability of our method to that of the AlignCos method (Hamilton et al., 2016b) using the intersection@k metric, as defined in Section 3. We use k ∈ {10, 20, 50, 100, 200, 500, 1000}. In Figure 1(a) we plot the intersection@k for different values of k for all splits, with solid lines for the results of our method and dashed lines for the results of the AlignCos method. It is clear that our method is significantly more stable, for all k values and across all splits. To better understand the parameters that affect the stability of the different methods, we also examine how the intersection changes with different values of the frequency cut-off. In Figure 1(b) we plot intersection@100 as a function of the frequency cut-off (the minimum number of word occurrences required for a word to be included in the ranking). Here, our method is again more stable for all corpus splits. In addition, our method is similarly stable regardless of the frequency cut-off, unlike the AlignCos method.

[Figure 1: Stability plots. Solid lines: our method (NN); dashed lines: AlignCos. Panels: (a) change in intersection@k w.r.t. k; (b) change in intersection@100 w.r.t. word frequency cut-off; (c) change in intersection@100 w.r.t. the number of neighbors considered. One line pair per split: Age, Gender, Hebrew, Occupation (Performer vs Sports), Time of week, Occupation (Creator vs Sports), Occupation (Creator vs Performer), French.]

We also examine how the size of the NN lists considered for the intersection affects the stability. In Figure 1(c) we plot the intersection@100 against the number of neighbors taken into consideration using our method.
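The stability metric of Equation 1 reduces to a few lines of code; the sketch below assumes the two inputs are the ranked word lists produced by two runs of the same detection method on the same corpus pair, differing only in the random seed of the underlying word2vec training.

```python
def intersection_at_k(ranking1, ranking2, k):
    """Fraction of shared words among the top-k of two rankings (Equation 1)."""
    top1, top2 = set(ranking1[:k]), set(ranking2[:k])
    return len(top1 & top2) / k

# Example usage (illustrative variable names):
# stability = intersection_at_k(run_a_ranking, run_b_ranking, k=100)
```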
We find that from around k = 250, our method is substantially more stable for all splits.

7.3 Quantitative Evaluation: DURel and SURel datasets

This field of semantic change suffers from a lack of proper evaluation datasets, and there is no common benchmark in use. Two new datasets were recently introduced and used to extensively compare previous methods (Schlechtweg et al., 2019): the DURel dataset (Schlechtweg et al., 2018) focuses on diachronic changes, while the SURel dataset (Hätty et al., 2019) focuses on domain-based semantic changes. We use them to verify the quality of our results and compare against AlignCos (Hamilton et al., 2016b). Both datasets include a limited number of German words, along with human annotations of the degrees of semantic relatedness between contexts of the words (across the different texts). However, they are not ideal, as they are extremely limited (22 words each).7 (Footnote 7: For our experiments, we follow the setup of Schlechtweg et al. (2019) and use 19/21 words for DURel/SURel, respectively.)

Evaluation Metrics. Spearman correlation is the standard measure used in this field to compare methods with respect to gold rankings. However, it is extremely important to note its limitations in this setting, since comparing to a very small gold ranking might be tricky. Specifically, it does not take into account the global ranking of each method, but only the relative position of each of the gold words in each method's ranking. For example, a method that ranks all the gold words at the bottom of the ranking (out of all the words in the vocabulary), in the same order, would be considered perfect, even though this is clearly not the case. As a possible solution to this problem, we suggest using Discounted Cumulative Gain (DCG), which better captures global rankings as well. As opposed to Spearman, this measure takes into account not only the order of the words, but also their actual scores:

$DCG(M) = \sum_{w \in W} \frac{GoldScore(w)}{\log_2(rank_M(w) + 1)}$    (3)

where W are the words in the gold dataset, and M is the model being evaluated. We report the results in Table 3. We compute AlignCos results with the best parameters reported in Schlechtweg et al. (2019).8 (Footnote 8: We were unable to reproduce the exact results from the paper: Spearman correlation of 0.866 and 0.851 on SURel and DURel, respectively.)

Table 3: Results on DURel and SURel with NN and with AlignCos.
method    measure    SURel    DURel
AlignCos  Spearman   0.800    0.814
NN        Spearman   0.859    0.59
AlignCos  DCG        -4.5     -4.31
NN        DCG        -4.54    -4.3

Our method outperforms AlignCos on SURel, both when measuring with Spearman correlation9 and with DCG. (Footnote 9: Average Spearman score over model runs with different numbers of iterations, as done in Schlechtweg et al. (2019).) For DURel, AlignCos gets better results when measuring with Spearman, but both methods are on par when using DCG.

[Figure 2: t-SNE visualization of the top-50 neighbors from each corpus for the word 'clutch', Gender split, with cyan for female and violet for male. Panels: (a) Female space; (b) Male space.]

[Figure 3: t-SNE visualization of the top-50 neighbors from each corpus for the word 'dam', Age split, with cyan for older and violet for young. Panels: (a) Young space; (b) Older space.]

7.4 Interpretation and Visualization

We find that in many cases it is not clear why the returned candidate words were chosen, and questions such as "why is the word 'dam' different across age groups?" often arise. The NN method lends itself to interpretation by considering the top-10 neighbors, as shown in Table 2. We note that this interpretation approach is very reliable in our method, as we are guaranteed to gain insights about the usage change when looking at neighboring words, since most of the neighbors will be different for the identified words. While we can definitely attempt to look at the NN also for the OP-based meth
ods, there we are not guaranteed at all to even spot a difference between the neighbors: it may absolutely be the case that the identified word moved in the embedding space “together” with most of its neighbors. In this case, looking at the neighbors will provide no insight on the nature of this change. We observed this phenomenon in practice. Nonetheless, comparing flat word lists is hard, and 10 words are often insufficient. We present a visualization method that aids in understanding the model’s suggestions. The visualization consists of projecting the word of interest and its top-50 neighbors from each corpus into two dimensions using t-SNE (Maaten and Hinton, 2008), and plotting the result while coloring the neighbors in the intersection in one color and the neighbors unique to each corpus in other colors. We expect the neighbors of a word of interest to have distinct neighbors across the corpora. Figures 2 and 3 show the visualizations for the 546 word clutch in the Gender split, with cyan for female and violet for male, and the word dam in the Age split, with cyan for older and violet for young (in both cases they were no shared neighbours). We plot the projection of the words twice – one plot for each embedding space. We can see that, as expected, the neighboring words are distinct, and that the target word belongs to the respective neighborhood in each space. We conclude that this is a useful tool for interpreting the results of our model. 8 Related Work Extensive work has been done on detecting word usage change across corpora that predated the alignment-based methods (Mitra et al., 2014; Jatowt and Duh, 2014; Kenter et al., 2015; Ho et al., 2016; Frermann and Lapata, 2016). In addition, two works are more closely related to our approach. In Azarbonyad et al. (2017), the authors also use the neighbors of a word in order to determine its stability (and therefore, the extent to which it changes). Their best model combines the traditional alignment-based approach with weighting the neighbors according to their rank and their stability. The algorithm is iterative, and they update the stability of all the words in the vocabulary in each update step. Our method uses the neighbors of the words directly, does not include an iterative process, and does not rely on cosine-distance in the aligned embeddings. In addition, their method requires computation for the whole vocabulary, while other methods, including ours, usually allow querying for a single word. Another work that considers the neighbors of the word in order to determine the extent of change is that of Hamilton et al. (2016a), in which they suggest a measure that is based on the changes of similarities between the target word and its neighbors in both spaces. They find that this method is more suitable for identifying changes that are due to cultural factors, rather than linguistic shift. This may serve as another motivation to move from the global measures to a local one. Recent works (Giullianelli, 2019; Martinc et al., 2019) explored the possibility of modeling diachronic and usage change using contextualized embeddings extracted from now ubiquitous Bert representations (Devlin et al., 2019). Focusing on the financial domain, Montariol and Allauzen (2020) use, on top of Bert embeddings, a clustering method that does not need to predefine the number of clusters and which leads to interesting results on that domain. Another approach from Hu et al. 
(2019) relies on the inclusion of example-based word sense inventories over time from the Oxford dictionary to a Bert model. Doing so provides an efficient fine-grained word sense representation and enables a seemingly accurate way to monitor word sense change over time. Most of those approaches could be easily used with our method, the inclusion of contextualized embeddings would be for example straightforward, we leave it for future work. 9 Conclusion Detecting words that are used differently in different corpora is an important use-case in corpusbased research. We present a simple and effective method for this task, demonstrating its applicability in multiple different settings. We show that the method is considerably more stable than the popular alignment-based method popularized by Hamilton et al. (2016b), and requires less tuning and word filtering. We suggest researchers to adopt this method, and provide an accompanying software toolkit. Acknowledgments We thank Marianna Apidianiaki for her insightful comments on an earlier version of this work. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT), and from the the Israeli ministry of Science, Technology and Space through the Israeli-French Maimonide Cooperation programme. The second and third authors were partially funded by the French Research Agency projects ParSiTi (ANR-16-CE330021), SoSweet (ANR15-CE38-0011-01) and by the French Ministry of Industry and Ministry of Foreign Affairs via the PHC Maimonide FranceIsrael cooperation programme. References Jacob Levy Abitbol, M´arton Karsai, Jean-Philippe Magu´e, Jean-Pierre Chevrot, and Eric Fleury. 2018. Socioeconomic dependencies of linguistic patterns in twitter: A multivariate analysis. In Proceedings of the 2018 World Wide Web Conference, WWW ’18, pages 1125–1134. 547 Maria Antoniak and David Mimno. 2018. Evaluating the Stability of Embedding-based Word Similarities. Transactions of the Association for Computational Linguistics, 6:107–119. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789–798, Melbourne, Australia. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised Neural Machine Translation. In 6th International Conference on Learning Representations, ICLR. Hosein Azarbonyad, Mostafa Dehghani, Kaspar Beelen, Alexandra Arkut, Maarten Marx, and Jaap Kamps. 2017. Words Are Malleable: Computing Semantic Shifts in Political and Media Discourse. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM ’17, pages 1509–1518. Robert Bamler and Stephan Mandt. 2017. Dynamic Word Embeddings. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 380–389, International Convention Centre, Sydney, Australia. PMLR. Mark Davies. 2015. Corpus of Historical American English (COHA). Marco Del Tredici, Raquel Fern´andez, and Gemma Boleda. 2019. Short-Term Meaning Shift: A Distributional Exploration. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2069–2075, Minneapolis, Minnesota. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Lea Frermann and Mirella Lapata. 2016. A Bayesian Model of Diachronic Meaning Change. Transactions of the Association for Computational Linguistics, 4(0). Mario Giullianelli. 2019. Lexical semantic change analysis with contextualised word representations. Master’s thesis, Institute for Logic, Language and Computation,, University of Amsterdam, July. William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016a. Cultural Shift or Linguistic Drift? Comparing Two Computational Measures of Semantic Change. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2116–2121, Austin, Texas. Association for Computational Linguistics. William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016b. Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489–1501, Berlin, Germany. Association for Computational Linguistics. Anna H¨atty, Dominik Schlechtweg, and Sabine Schulte im Walde. 2019. SURel: A Gold Standard for Incorporating Meaning Shifts into Term Extraction. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 1–8, Minneapolis, Minnesota. Association for Computational Linguistics. Tin Kam Ho, Luis A. Lastras, and Oded Shmueli. 2016. Concept Evolution Modeling Using Semantic Vectors. In Proceedings of the 25th International Conference Companion on World Wide Web, WWW ’16 Companion, pages 45–46. International World Wide Web Conferences Steering Committee. Renfen Hu, Shen Li, and Shichen Liang. 2019. Diachronic sense modeling with deep contextualized word embeddings: An ecological view. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3899–3908, Florence, Italy. Association for Computational Linguistics. Adam Jatowt and Kevin Duh. 2014. A Framework for Analyzing Semantic Change of Words Across Time. In Proceedings of the 14th ACM/IEEE-CS Joint Conference on Digital Libraries, JCDL ’14, pages 229– 238. IEEE Press. Yova Kementchedjhieva, Sebastian Ruder, Ryan Cotterell, and Anders Søgaard. 2018. Generalizing Procrustes Analysis for Better Bilingual Dictionary Induction. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 211–220, Brussels, Belgium. Association for Computational Linguistics. Tom Kenter, Melvin Wevers, Pim Huijnen, and Maarten de Rijke. 2015. Ad Hoc Monitoring of Vocabulary Shifts over Time. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, CIKM ’15, pages 1191–1200. ACM. Yoon Kim, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal Analysis of Language through Neural Language Models. 
In 548 Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 61–65, Baltimore, MD, USA. Association for Computational Linguistics. Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change. In Proceedings of the 24th International Conference on World Wide Web, pages 625–635. International World Wide Web Conferences Steering Committee. Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: a survey. In Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In Proceedings of ICLR. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018b. Word translation without parallel data. In 6th International Conference on Learning Representations, ICLR. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing Data using t-SNE. Journal of machine learning research, 9(Nov):2579–2605. Matej Martinc, Petra Kralj Novak, and Senja Pollak. 2019. Leveraging contextual embeddings for detecting diachronic semantic shift. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. Sunny Mitra, Ritwik Mitra, Martin Riedl, Chris Biemann, Animesh Mukherjee, and Pawan Goyal. 2014. That’s sick dude!: Automatic identification of word sense change across different timescales. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1020–1029, Baltimore, Maryland. Association for Computational Linguistics. Syrielle Montariol and Alexandre Allauzen. 2020. E´tude des variations s´emantiques `a travers plusieurs dimensions. In Actes de la conf´erence conjointe JEP-TALN 2020, Nancy, France. Maja Rudolph and David Blei. 2018. Dynamic Embeddings for Language Evolution. In Proceedings of the 2018 World Wide Web Conference, WWW ’18, pages 1003–1011. Maja Rudolph, Francisco Ruiz, Susan Athey, and David Blei. 2017. Structured embedding models for grouped data. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, pages 250–260, USA. Curran Associates Inc. Dominik Schlechtweg, Anna H¨atty, Marco Del Tredici, and Sabine Schulte im Walde. 2019. A Wind of Change: Detecting and Evaluating Lexical Semantic Change across Times and Domains. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 732–746, Florence, Italy. Association for Computational Linguistics. Dominik Schlechtweg, Sabine Schulte im Walde, and Stefanie Eckmann. 2018. Diachronic Usage Relatedness (DURel): A Framework for the Annotation of Lexical Semantic Change. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 169–174, New Orleans, Louisiana. 
Association for Computational Linguistics. Peter H Sch¨onemann. 1966. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1–10. Milan Straka and Jana Strakov´a. 2017. Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88–99, Vancouver, Canada. Association for Computational Linguistics. Nina Tahmasebi, Lars Borin, and Adam Jatowt. 2018. Survey of Computational Approaches to Lexical Semantic Change. CoRR, abs/1811.06278. Laura Wendlandt, Jonathan K. Kummerfeld, and Rada Mihalcea. 2018. Factors Influencing the Surprising Instability of Word Embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2092–2102, New Orleans, Louisiana. Association for Computational Linguistics. Matti Wiegmann, Benno Stein, and Martin Potthast. 2019. Celebrity Profiling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2611–2618, Florence, Italy. Association for Computational Linguistics. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized Word Embedding and Orthogonal Transform for Bilingual Word Translation. In Proceedings of the 2015 Conference of the North 549 American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011, Denver, Colorado. Association for Computational Linguistics. Jaewon Yang and Jure Leskovec. 2011. Patterns of Temporal Variation in Online Media. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, WSDM ’11, pages 177–186, New York, NY, USA. ACM. Zijun Yao, Yifan Sun, Weicong Ding, Nikhil Rao, and Hui Xiong. 2018. Dynamic Word Embeddings for Evolving Semantic Discovery. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM ’18, pages 673– 681. ACM. 550 A Implementation Details Tokenization We tokenize the English, French and Hebrew tweets using ark-twokenize-py10, Moses tokenizer11 and UDPipe (Straka and Strakov´a, 2017), respectively. We lowercase all the tweets and remove hashtags, mentions, retweets and URLs. We replace all the occurrences of numbers with a special token. We discard all words that do not contain one of the following: (1) a character from the respective language; (2) one of these punctuations: “-”, “’”, “.”; (3) emoji. Word embeddings We construct the word representations by using the continuous skip-gram negative sampling model from Word2vec (Mikolov et al., 2013a,b). We use the Gensim12 implementation. For all our experiments, we set vector dimension to 300, window size to 4, and minimum number of occurrences of a word to 20. The rest of the hyperparameters are set to their default value. For the stability experiments we run the embedding algorithm twice, each time with a different random seed. B Qualitative Evaluation: Detected Words We show the top-10 words our method yields for each of the different splits, accompanied with the nearest neighbors in each corpus (excluding words in the intersection), to better understand the context. For comparison, we also show the top-10 words according to the AlignCos method. The splits are the following: English: 1900 vs. 1990 The list of top-10 detected words from our method (NN) vs. 
AlignCos method, for corpus split according to the year of the English text is displayed in Table 4. Age: Young vs. Older The list of top-10 detected words from our method (NN) vs. AlignCos method, for corpus split according to the age of the tweet-author is displayed in Section 7. Interesting words found at the top-10 list are the following (young vs. older): dem (‘them’ vs. US political party), dam (‘damn’ vs. water barrier), assist (football contribution vs. help). In addition, interesting 10https://github.com/myleott/ ark-twokenize-py 11https://www.nltk.org/_modules/nltk/ tokenize/moses.html 12https://radimrehurek.com/gensim/ models/word2vec.html words that came up in the top-30 list are the following: pc (personal computer vs. Canadian party), presents (introduces vs. gifts), wing (general vs. political meaning), prime (general vs. political meaning), lab (school vs. professional). Gender: Male vs. Female The list of top-10 detected words from our method (NN) vs. AlignCos method, for corpus split according to the gender of the tweet-author is displayed in Table 5. Interesting words found at the top-10 list are the following (male vs. female): clutch (grasping vs. female bag), bra (colloquial usage like ‘bro’ vs. female clothing), gp (grand prix event vs. general practitioner). In addition, interesting words that came up in the top-40 list are the following: stat (statistics vs. right away), pit (car-related vs. dog-related), dash (radio station vs. quantity), pearl (pearl harbor vs. gemstone and color). Occupation: Performer vs. Sports The list of top-10 detected words from our method (NN) vs. AlignCos method, for corpus split according to the occupation (Performer vs. Sports) of the tweetauthor is displayed in Table 6. Occupation: Creator vs. Sports The list of top10 detected words from our method (NN) vs. AlignCos method, for corpus split according to the occupation (Creator vs. Sports) of the tweet-author is displayed in Table 7. Interesting words found at the top-10 list are the following (creator vs. sports): cc (carbon copy vs. country club), op (event opening vs. operation), wing (politics vs. football player position), worlds (earth vs. world cup). In addition, interesting words that came up in the top-20 list are the following: oval (oval office vs. sports ground), fantasy (genre vs. fantasy football), striking (shocking vs. salient), chilling (frightening vs. relaxing), fury (book: fire and fury vs. British boxer). Occupation: Creator vs. Performer The list of top-10 detected words from our method (NN) vs. AlignCos method, for corpus split according to the occupation (Creator vs. Performer) of the tweetauthor is displayed in Table 8. Interesting words found at the top-10 list are the following (creator vs. performer): dash (travel vs. person), presents (introduces vs. gifts), chapter (book vs. movie). In addition, interesting words that came up in the top-30 list are the following: cartoon (cartoonist vs. movie), scoop (news story vs. ice cream), 551 mega (money vs. largeness), sessions (assembly vs. period). Time of week: Weekday vs. Weekend The list of top-10 detected words from our method (NN) vs. AlignCos method, for corpus split according to the time of week (Weekday vs. Weekend) of the tweet is displayed in Table 9. Interesting words found at the top-10 list are the following (weekday vs. weekend): cc (credit card vs. carbon copy), pitch (presentation attribute vs. playing surface), bond (agreement vs. movie character). 
In addition, interesting words that came up in the top-30 list are the following: sunday (day of the week vs. vacation-related), vp (vice president vs. tv-series: True Jackson, VP), third (report-related vs. sportsrelated), cliff (first name vs. mountain cliff), fight (general meaning vs. boxing). French: 2014 vs. 2018 The list of top-10 detected words from our method (NN) vs. AlignCos method, for corpus split according to the year of the French text is displayed in Table 10. Interesting words found at the top-10 list are the following (2014 vs. 2018): ia (frequent misspelled contraction of “ya” in 2014, vernacular form of “il y a”, there is, vs. “intelligence artificielle”, artificial intelligence), divergent (the movie vs. the adjective). In addition, interesting words that came up in the top-30 list are the following: pls (contraction of the borrowing “please” vs. the acronym of “Position lat´erale de s´ecurit´e”, lateral safety position, which is now used as a figurative synonym for “having a stroke”. In the same vein, and tied to political debates, we note apl (contraction of “appel/appeler”, call/to call vs. controversial housing subsidies). Hebrew: 2014 vs. 2018 The list of top-10 detected words from our method (NN) vs. AlignCos method, for corpus split according to the year of the Hebrew text is displayed in Figure 4. Interesting words found at the top-10 list (2014 vs. 2018) are the following (we use transliteration accompanied with a literal translation to English): beelohim–in god (pledge word vs. religion-related) and Kim– Kim (First name vs. Kim Jong-un). In addition, interesting words that came up in the top-30 list are the following: shtifat–washing (plumbing vs. brainwashing), miklat–shelter (building vs. asylum (for refugees)), borot–pit/ignorance (plural of pit vs. ignorance). 552 English (1900 vs. 
1990) NN neighbors in each corpus gay cheery, humoured, apparel, natured, dresses, attire, neat, bright, genial, unusually lesbian, transgender, lesbians, katz, bisexual, bisexuals, coalition, gays, bi, gras van wyk, commented, sterne, skipper, south, simon, defarge, ned, island, carolina truck, helsing, luyden, luydens, pickup, toyota, jeep, porsche, volvo, der press pressed, publisher, papers, issues, dublin, circulation, thickest, wilson, paper, payment ams, belknap, harvester, wesleyan, newberry, westview, middletown, esp, harrington, gainesville oxford durham, albany, lincoln, sometime, ireland, john, canon, christ, bishops, newcastle clarendon, basingstoke, supervising, blackwell, 1921, researching, database, ibadan, walton, peruse major curtly, osborne, gordon, retorted, dryly, inspector, steele, chester, stewart, morris brigadier, factor, dramatist, producers, andre, schomburg, boswell, brian, biggest, insignia 2 vide, woodcuts, illustrations, peggy, demy, cloister, portrait, memoirs, baroness, allen rte, 767, tn, dresden, vols, 38225, bp, klingon, 1863, 98765432 cambridge dublin, queens, glasgow, tutor, jesus, newcastle, christ, assistant, student, kent belknap, blackwell, 1921, persephone, harvester, hogarth, clarendon, ams, vols, esp 1 ornamental, woodcuts, dad, biography, section, demy, cent, 8vo, t, 3s xlibris, deduct, freepost, 345, 1001, 98765432, 350, 888, toulouse, bunkyo new revised, comer, institute, commonwealth, comers, development, insurance, illustrated, testament, magazine ungar, picayune, schocken, ams, crowell, atheneum, upstate, 10012, praeger, harrington check restrain, effort, balance, exertion, strove, readiness, restrained, gave, jerk, held cashier, update, checkbook, checks, payable, money, certificate, postal, brochure, lor AlignCos Top-10 wanting, gay, check, starting, major, actually, touching, harry, headed, romance Table 4: Top-10 detected words from our method (NN) vs. AlignCos method (last row), for corpus split according to the year of the text. Each word from our method is accompanied by its top-10 neighbors in each of the two corpora (1900 vs. 1990). Gender (male vs. 
female) bra bruh, brah, bro, cuh, homie, boi, cuzzo, dawg, breh, brudda thong, jeans, strapless, leggings, tights, underwear, skirt, pants, sneakers, shorts clutch threes, walkoff, mookie, dingers, layups, midrange, game-winning, diaw, gwg, layup sequin, beaded, gown, dress, handbag, chiffon, headpiece, tote, sandal, swarovski mm cores, thickness, oled, diameter, deg, usb-c, ssd, dbo, gpu, cpu , huh, arizona, that’s, errrr, bcz, thts, cc, , mc armand, dilla, rza, kuntryking, rapper, boney, riz, donald’s, huss, dizzee obe, showstopper, groupie, fleming, thnks, hoff, cohost, honoree, harmon, reece gp motogp, thruxton, monza, indycar, dtm, snetterton, suzuka, hockenheim, criterium, wec physicians, pharmacists, clinical, procurement, ndis, insurers, nbn, tfl, hep, mh keeper midfielder, cech, krul, benteke, free-kick, freekick, aguero, defoe, benzema, goalscorer dynamo, goofball, hero, hustler, touche, stud, digger, nemesis, saver, ruler nd tht, iu, wvu, gtown, isu, wisco, ou, gng, huggs, byu minot, nh, ky, hoosier, farmers, heitkamp, ranchers, dakota, rural, ndans hay bales, doon, beech, hinton, blackwood, noches, ayer, mong, dartford, rooty beccy, goat, mclaren, portage, ale, glasto, grafton, daffodils, cornish, crap steph lebron, kyrie, klay, harden, draymond, rondo, melo, delly, dwade, korver chels, rach, leah, sam, liz, dani, trish, lovie, cait, kel echo homepod, orc, cortana, npc, oculus, undead, redstone, forked, emergent, echoed paradiso, avalon, asbury, hyde, sondheim, colosseum, oasis, , empress, inconvenient AlignCos Top-10 bra, mm, todd, bonnie, ralph, casey, stacey, gordon, lou, dana Table 5: Top-10 detected words from our method (NN) vs. AlignCos method (last row), for corpus split according to the gender of the tweet-author. Each word from our method is accompanied by its top-10 neighbors in each of the two corpora (Male vs. Female). 553 Occupation (performer vs. 
sports) NN neighbors in each corpus blues funk, reggae, b.b., boneshakers, folk, bluegrass, grooves, rhythm, trippers, moody hawks, leafs, rangers, sabres, bruins, tahs, fulham, knights, yotes, maroons cc , , , , , , , , , lol sabathia, montclair, firestone, bethpage, isleworth, tourn, dorado, quail, riviera, westchester dub anime, subtitles, dubbing, dubbed, dlc, boxset, badman, rmx, miku, trax lakeshow, crunk, yessir, w, yessirr, ayeeee, win, ayeeeee, yessirrr, yesir bra thong, panty, headband, panties, spanx, jeans, corset, uggs, tights, blouse bro, cuh, brodie, boi, dawg, brahh, breh, broo, cuzz, cuzo track rmx, tunes, album’s, , trax, single, sampler, instrumental, unreleased, song’s racetrack, racin, field, slicks, velodrome, circuit, mtb, race, racing, sandown wing extremist, liberal, right-wing, fascists, leftist, conservative, propaganda, extremism, extremists, nationalists flank, footed, rear, wingers, fullback, retake, netting, seat, midfield, fullbacks par ghar, nahin, dekhna, mujhe, rahe, kiya, apne, naam, aaj, theek pars, bogey, birdie, holes, putts, hole, putted, fairway, birdied, sawgrass mo starlite, reeds, knuckleheads, bossier, rocke, kcmo, stafford, granada, hutchinson, rosemont bamba, tash, , wesley, kev, mane, yessssssssss, wes, yessssir, muzzy ace sweeeeet, fantastic, amazeballs, rad, amaaaazing, exceptional, sweeeet, jez, amazing-, hoot sickkk, jb, robin, angel, stoner, ostrich, ayeeeee, milly, homey, hustler duo supergroup, violinist, troupe, cardenas, stylings, cellist, baritone, multi-talented, vocalist, bassist tandem, northgate, dominant, keanu, hooker, wingers, rebounder, squads, superstar, jada AlignCos Top-10 spencer, reed, dub, kurt, jerry, kirk, nova, watson, wa, curtis Table 6: Top-10 detected words from our method (NN) vs. AlignCos method (last row), for corpus split according to the occupation of the tweet-author. Each word from our method is accompanied by its top-10 neighbors in each of the two corpora (performer vs. sports). Occupation (creator vs. 
sports) NN neighbors in each corpus cc , , , , , xo-mk, rt, , , montclair, firestone, bethpage, isleworth, tourn, dorado, quail, riviera, westchester, vero op nel, reeva, roux, hoare, pathologist, shauna, baden-clay, ed, nedrow, barrister reconstruction, achilles, knee, ruptured, recovering, acl, surgeon, meniscus, tendon, injury blues reggae, bluegrass, fillmore, rhythm, rockers, ellington, grooves, techno, dnb, hob hawks, leafs, sabres, bruins, tahs, fulham, yotes, rovers, gunners, maroons origin ethnicity, ancestry, identity, significance, mythology, identification, protagonists, lineage, lore, retelling nrl, afl, maroons, qld, footy, ashes, wallabies, a-league, premiership, roosters wing right-wing, far-left, faction, left-wing, zionist, reactionary, globalist, conservative, extremist, liberal flank, footed, fullback, retake, netting, seat, midfield, fullbacks, guard, mozzarella weigh meddle, defer, invest, bathe, reassure, implicated, experts, ponder, expel, summarize weigh-in, weigh-ins, ins, sparring, pre-fight, ufc, bellator, strikeforce, spar, ufcfightpass worlds universes, history’s, colliding, realms, planets, universe, eras, modes, franchises, environments europeans, olympics, worldcup, commonwealths, wc, commonwealth, championships, european, cwg, paralympics sessions comey, rosenstein, recusal, mcgahn, mccabe, recused, recuse, mueller, doj, dhs practices, sess, circuits, drills, weights, interval, camps, trainings, training, workout track rmx, compilation, reloaded, hexagon, soundcloud, ep, dnb, bandsintown, tunes, rework racetrack, racin, sx, field, slicks, velodrome, circuit, mtb, race, racing presents luts, voyager, housecall, ottaviani, uploaded, balearic, inharmony, derringer, machel, schulz pressies, pressie, advent, decorating, cupcakes, toys, x-mas, sweets, certificates, handmade AlignCos Top-10 lawrence, marc, morris, op, diamond, carter, dash, cont, bee, norman Table 7: Top-10 detected words from our method (NN) vs. AlignCos method (last row), for corpus split according to the occupation of the tweet-author. Each word from our method is accompanied by its top-10 neighbors in each of the two corpora (creator vs. sports). 554 Occupation (creator vs. 
performer) NN neighbors in each corpus echo distortion, echoing, google’s, lcd, ibooks, vibe, voice, songbook, audience, roku griffith, park, regents, acjokes, crest, roxy, paramount, trippers, folly, petco inc kopel’s, acquires, takeover, selects, async, -short, sony, invests, blaqstarr, tata aimless, caa, phonte, psi, edu, morillo, fuentes, omega, intl, int’l cont rec, thru, mang, recs, mi, ul, sr, bsm, ing, tm thku, rt, oth, btw-, muah, 0) , vry, twd, rt, wnt presents luts, voyager, housecall, ottaviani, uploaded, balearic, inharmony, derringer, machel, schulz morillo, erick, bash, pressies, whalum, pressie, winans, pawty, productions, torry rebel kurdish, libyan, jihadist, factions, sunni, jihadi, militant, hamas, daesh, isis ruler, rocker, geek, muse, whore, nerd, madonna’s, daydream, gangster, hippie buck manziel, clayton, jerry, wiley, cowboys, romo, ambrose, flacco, kidd, mavs bucky, cocker, paperboy, rickie, hefner, mcdowell, roddy, cy, farmer, leadoff thee salute, paraphrase, bishop, esv, browning, faulkner, lia, medina, kaysha, atwood shalt, thyself, merciful, ephesians, hahahahahahah, thine, philippians, yesssssss, throne, humbly chapter prologue, prc, outlining, novella, pages, scene, heartstopper, cebu, tome, outline bl, tblst, sdmf, doom, grimmest, warhammer, quilt, draculas, dario, crusade dash jnr, peppermint, flashes, wop, keef, cappuccino, scotty, hummus, lily, disco skeetv, skee, radio, hbr, snip, twirl, , blip, iheart, krispy op hoare, pathologist, shauna, baden-clay, nedrow, barrister, arguedas, protestor, bourque, arias urgentdogsofmiami’s, doreenvirtue’s, dermatologist, examination, surgeon, physio, intv, ons, nasal, doctor AlignCos Top-10 vince, todd, dana, watson, norman, marc, jerry, rs, mitch, brooks Table 8: Top-10 detected words from our method (NN) vs. AlignCos method (last row), for corpus split according to the occupation of the tweet-author. Each word from our method is accompanied by its top-10 neighbors in each of the two corpora (creator vs. performer). Time of week (weekday vs. 
weekend) NN neighbors in each corpus trick sudoku, sneaky, summat, moonwalk, frighten, rubik’s, clicker, smthng, stunt, foam treaters, treater, tricker, trick-or-treating, trick-or-treat, treats, or, neices, trick-or-treaters, kids cc citibank, debit, wachovia, credit, barter, visa, waived, payment, pkg, expedia snyder, ecu, rivera, mvc, yankees, clinches, natl, lin, ul, rk ups dhl, upping, situps, gowalla, shipment, shipments, fy, lunges, webos, sit-ups budgets, tractor, full-time, dri, radioshack, quik, distribution, fro, cheeseburgers, soulja recall recalling, maclaren, stork, cribs, strollers, defective, pedals, tundra, toyota, manufacturer fancy, specify, attribute, recommend, resist, adjust, vary, fwiw, grieve, refrain rush stampede, queues, detour, stretch, layover, standstill, congestion, levin, oncoming, braving refusal, jerry, pass, cbs, sellout, sideline, dover, interference, onside, tuscaloosa bond etfs, bernanke, insurer, sentencing, trustee, r.i., deficits, rba, hig, funds labor, humphrey, clarke, srk, titanic, fireman, colonel, fx, barney, jessie pitch bullpen, clinch, utley, win-win, lidge, interviewed, series, signage, stun, teleconference midfield, half-time, werth, tsn, offside, scoreless, roughing, punts, goal, rockies lloyd marv, asher, peter, andre, payton, phillip, bennett, o’connor, neal, wright llyod, jeward, mcelderry, lloyd’s, ollie, stace, danyl’s, jedwards, afro, olly’s zone faction, wasteland, emp, vibin, i.e, l.a., constraints, realms, xtreme, jammin endzone, redzone, fumbled, fumbles, interceptions, touchdown, interference, bounds, interception, romo ref salary, overturn, statewide, applicants, amendments, position, ordinance, commissioning, nsw, anc offside, capello, burley, mangini, play-off, officiating, roughing, rooney, interference, fumbled AlignCos Top-10 maine, evan, griffin, terry, sp, aaron, ken, harris, todd, li Table 9: Top-10 detected words from our method (NN) vs. AlignCos method (last row), for corpus split according to the time of week of the tweet. Each word from our method is accompanied by its top-10 neighbors in each of the two corpora (weekday vs. weekend). 555 French (2014 vs. 
2018) NN neighbors in each corpus malcom charmed, futurama, desperate, housewives, housewifes, simpson, hunter, ferb, smallville, scott dembele, coutinho, mariano, paulinho, rafinha, diakhaby, demb´el´e, dembel´e, dembouz, rakitic rn en, eb, zn, en., enn, bored, bloquee, same, omfgg, stm fn, rn., dlf, fn., lfi, fhaine, lr, ex-fn, lrem, pcf boe bne, bnne, bonne, binne, bonnne, boonne, bone, bnn, bonnee, booonne peiffer, fourcade, svendsen, makarainen, schempp, desthieux, guigonnat, kuzmina, dahlmeier, tarjei mina kenza, ibtissem, bety, ghada, lina, laith, bzf, liya, ana, salom yerry, yerri, paulinho, gomes, mina., alcacer, rakitic, rafinha, dembele, coutinho smet smettre, smette, tmet, met, spose, senjaille, stappe, smettent, sdonne, samuse hallyday, laeticia, laura, læticia, vartan, halliday, hallyday., johnny, boudou, laetitia lr bdx, dk, poitiers, bx, rouen, caen, amiens, malo, perpi, aix lr., lrem, dlf, lfi, fn, ump, r´epublicains, udi, vb, rn divergent tmr, tfios, thg, catching, hunger, mockingjay, fsog, insurgent, allegiant, tobias divergent., diverge, divergentes, diff`erent, convergent, diverger, diam´etralement, concordent, oppos´ees., divergences ia ya, y’, yaura, quya, yavai, yaver, yora, yavait, yia, jconai artificielle, intelligenceartificielle, i.a, ia., intelligence, iot, i.a., artificielle., chatbots, automatisation jdr jdrr, hablais, duele, pfpfpfpfpf, eso, igual, nadie, d´ejame, pensar, pelis jdr., warhammer, shadowrun, roleplay, pathfinder, shmup, fangame, dungeon, rp, webcomic cs csst, ceest, enpls, wch, tst, cetei, wcch, c, ctei, cetai csgo, rl, pubg, fornite, fortnite, battlerite, faceit, ow, cod, dota AlignCos Top-10 -l, malcom, maximilien, dna, lr, mina, boe, dias, sierra, giuseppe Table 10: Top-10 detected words from our method (NN) vs. AlignCos method (last row), for corpus split according to the year of the text. Each word from our method is accompanied by its top-10 neighbors in each of the two corpora (2014 vs. 2018). Figure 4: Top-10 detected words from our method (NN) vs. AlignCos method (last row), for corpus split according to the year of the text. Each word from our method is accompanied by its top-10 neighbors in each of the two corpora (2014 vs. 2018).
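As referenced in Appendix A, the following is a minimal sketch of the embedding-training setup using the Gensim 4.x API (in Gensim 3.x the `vector_size` argument is called `size`). This is illustrative only and not the authors' exact script; the corpus object and function name are hypothetical.

```python
from gensim.models import Word2Vec

def train_embeddings(tokenized_tweets, seed=1):
    """Train a skip-gram word2vec model with negative sampling on one
    corpus split, using the hyperparameters reported in Appendix A."""
    return Word2Vec(
        sentences=tokenized_tweets,  # iterable of token lists, one per tweet
        vector_size=300,             # embedding dimension
        window=4,                    # context window size
        min_count=20,                # minimum number of occurrences of a word
        sg=1,                        # skip-gram (rather than CBOW)
        negative=5,                  # negative sampling (Gensim default)
        seed=seed,                   # vary the seed for the stability runs
    )

# For the stability experiments, the model is trained twice with
# different random seeds, e.g.:
# model_a = train_embeddings(corpus, seed=1)
# model_b = train_embeddings(corpus, seed=2)
```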
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5759–5771 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5759 Cross-Lingual Unsupervised Sentiment Classification with Multi-View Transfer Learning Hongliang Fei, Ping Li Cognitive Computing Lab Baidu Research 1195 Bordeaux Dr, Sunnyvale, CA 94089, USA 10900 NE 8th St, Bellevue, WA 98004, USA {hongliangfei,liping11}@baidu.com Abstract Recent neural network models have achieved impressive performance on sentiment classification in English as well as other languages. Their success heavily depends on the availability of a large amount of labeled data or parallel corpus. In this paper, we investigate an extreme scenario of cross-lingual sentiment classification, in which the low-resource language does not have any labels or parallel corpus. We propose an unsupervised cross-lingual sentiment classification model named multi-view encoder-classifier (MVEC) that leverages an unsupervised machine translation (UMT) system and a language discriminator. Unlike previous language model (LM) based fine-tuning approaches that adjust parameters solely based on the classification error on training data, we employ the encoder-decoder framework of a UMT as a regularization component on the shared network parameters. In particular, the cross-lingual encoder of our model learns a shared representation, which is effective for both reconstructing input sentences of two languages and generating more representative views from the input for classification. Extensive experiments on five language pairs verify that our model significantly outperforms other models for 8/11 sentiment classification tasks. 1 Introduction Recent neural network models have achieved remarkable performance on sentiment classification in English and other languages (Conneau et al., 2017; Chen et al., 2018; He et al., 2019; Chen and Qian, 2019). However, their success heavily depends on the availability of a large amount of labeled data or parallel corpus. In reality, some low-resource languages or applications have limited labeled data or even without any labels or parallel corpus, which may hinder us from training a robust and accurate sentiment classifier. To build sentiment classification models for low-resource languages, recent researchers developed cross-lingual text classification (CLTC) models (Xu and Yang, 2017; Eriguchi et al., 2018), which transfers knowledge from a resource-rich (source) language to a low-resource (target) language. The core of those models is to learn a shared language-invariant feature space that is indicative of classification for both languages. Therefore a model trained from the source language can be applied to the target language. Based on how the shared feature space is learned, there are three categories, namely word-level alignments (Andrade et al., 2015), sentence-level alignments (Eriguchi et al., 2018) and document level alignments (Zhou et al., 2016). Those models can well capture the semantic similarity between two languages. They, however, require parallel resources such as a bilingual dictionary, parallel sentences, and parallel Wikipedia articles. Such a limitation may prevent these models from being applicable in languages without any parallel resources. Recently, there have been several attempts at developing “zero-resource” models (Ziser and Reichart, 2018; Chen et al., 2018; Chen and Qian, 2019). 
Most notably, Ziser and Reichart (2018) proposed a cross-lingual & cross-domain (CLCD) model that builds on pivot based learning and bilingual word embedding. Although CLCD does not directly need labeled data or parallel corpus, it requires bilingual word embeddings (BWEs) (Smith et al., 2017) that requires thousands of translated words as a supervised signal. Chen et al. (2018) developed an adversarial deep averaging network to learn latent sentence representations for classification, but it had an implicit dependency on BWEs (Zou et al., 2013) that requires pretraining on a large bilingual parallel corpus. Chen and Qian (2019) extended the 5760 cross-lingual model in Chen et al. (2018) to multiple source languages by using the unsupervised BWEs (Lample et al., 2018b) and adding individual feature extractor for each source language, which eliminated the dependency on a parallel corpus. Nevertheless, their model is very sensitive to the quality of BWEs and performs poorly on distant language pairs such as English-Japanese, as illustrated in their experimental study. In parallel, cross-lingual language models (LMs) trained from raw Wikipedia texts, such as multilingual BERT1 (Devlin et al., 2019) and XLM (Conneau and Lample, 2019), have been prevalent in solving zero-shot classification problems (Wu and Dredze, 2019). Those models use the BERT-style Transformer (Vaswani et al., 2017) architecture simultaneously trained from multiple languages to construct a sentence encoder, and fine-tune the encoder and a classifier on labeled training data from the source language. Then the fine-tuned model is applied to the target language. The whole process does not require any labeled data or parallel corpus. However, under the “zero parallel resource” setting, the encoder trained from self-supervised masked language modelling within each language may not well capture the semantic similarity among languages, which could harm the generalization performance of fine-tuned models. In this paper, we propose a sentiment classification model called multi-view encoder-classifier (MVEC) in an unsupervised setting, in which we only have monolingual corpora from two languages and labels in the source language. Different from previous language model (LM) based fine-tuning approaches (Devlin et al., 2019; Conneau and Lample, 2019) that adjust parameters solely based on the classification error of training data, we utilize the encoder-decoder network from unsupervised machine translation (UMT) (Lample et al., 2018a) to regularize and refine the shared latent space. In particular, the transformer-based encoder regularized by a language discriminator learns shared but more refined language-invariant representations, which are effective for both reconstructing sentences from two languages by the decoder and generating multi-view feature representations for classification from input documents. In our model, we construct two views from the en1https://github.com/google-research/ BERT/blob/master/multilingual.md coder: (i) the encoded sentences in the source language; (ii) the encoded translations of the source sentences in the target language. Our proposed MVEC is partially initialized by pretrained LMs (Conneau and Lample, 2019) but further fine-tuned to align sentences from two languages better, accurately predict labeled data in the source language and encourage consensus between the predictions from the two views. 
The full model is trained in an end-to-end manner to update parameters for the encoder-decoder, the language discriminator, and the classifier at each iteration. Our contributions in this paper are as follows: • We present an unsupervised sentiment classification model without any labels or parallel resource requirements for the target language. By designing a multi-view classifier and integrating it with pretrained LMs and UMT (Lample et al., 2018a), we build our model (MVEC) on a more refined latent space that is robust to language shift with better model interpretation compared to previous zero-shot classification works (Chen et al., 2018; Conneau and Lample, 2019). • We extensively evaluate our model in 5 language pairs involving 11 sentiment classification tasks. Our full model outperforms state-ofthe-art unsupervised fine-tuning approaches and partially supervised approaches using crosslingual resources in 8/11 tasks. Therefore, our results provide a strong lower bound performance on what future semi-supervised or supervised approaches are expected to produce. 2 Related Work 2.1 Cross-Lingual Text Classification (CLTC) CLTC aims to learn a universal classifier that can be applied to languages with limited labeled data (Bel et al., 2003; Dong and de Melo, 2019; Keung et al., 2019), which is naturally applicable for sentiment analysis. Traditional supervised methods utilize cross-lingual tools such as machine translation systems and train a classifier on the source language (Prettenhofer and Stein, 2010). The latest models used parallel corpus either to learn a bilingual document representation (Zhou et al., 2016) or to conduct cross-lingual model distillation (Xu and Yang, 2017). In the unsupervised setting, Chen et al. (2018) learned language-invariant latent cross-lingual representations with adversarial training. Ziser 5761 and Reichart (2018) used pivot based learning and structure-aware DNN to transfer knowledge to low-resourced languages. In both papers, however, they have an implicit dependency on BWEs, which requires a bilingual dictionary to train. Chen and Qian (2019) was the first fully unsupervised approach using the unsupervised BWEs (Lample et al., 2018b) and multi-source languages with adversarial training. In contrast, our model is a multi-view classification model that is seamlessly integrated pretrained LMs (Conneau and Lample, 2019) and the encoder-decoder from UMT (Lample et al., 2018a) with adversarial training. Hence we learn a more fine-tuned latent space to better capture document-level semantics and generate multiple views to represent the input. 2.2 Unsupervised Machine Translation UMT does not rely on any parallel corpus to perform translation, which lays a foundation for our approach. At the word-level, Lample et al. (2018b) built a bilingual dictionary between two languages by aligning monolingual word embeddings in an unsupervised way. At the sentence and document level, Lample et al. (2018a) proposed a UMT model by learning an autoencoder that can reconstruct two languages under both within-domain and cross-domain settings. Lample et al. (2018c) extended Lample et al. (2018a) with a phrase-based approach. Since we aim to learn more refined language-invariant representations for classification, it is natural to employ the encoder from a UMT system to generate multiple views of the input and enable knowledge transfer. 
2.3 Multi-View Transfer Learning The task of multi-view transfer learning is to simultaneously learn multiple representations and transfer the learned knowledge from source domains to target domains, which have fewer training samples. Generally, data from different views contains complementary information and multiview learning exploits the consistency from multiple views (Li et al., 2019). Our work is particularly inspired by Fu et al. (2015) and Zhang et al. (2019), both of which exploit the complementarity of multiple semantic representations with semantic space alignment. The difference is that we use an encoder-decoder framework to generate multiple views for input from the source language and enforce a consensus between their predictions. Furthermore, we introduce a language discriminator (Lample et al., 2018a) to encourage the encoder to generate language-invariant representations from the input. 3 Methodology In this section, we will introduce our model’s general workflow, including the details of each component and our training algorithm. 3.1 Problem Setup Given monolingual text data {Dsrc, Dtgt} from both the source and target language with a subset of labeled samples {DL src, yL src} in the source language where yL src is a vector of class labels and DL src ⊂Dsrc, the task aims to build a universal classification model f(X; θ) →y parameterized by θ that can be directly applicable to unlabeled data in the target language, where X is an input document from any language and y is its class label. Note that in this paper we assume two languages share the same class types. 3.2 Model Architecture Our proposed approach multi-view encoder classifier (MVEC) is composed of three components: an encoder-decoder, a language discriminator, and a classifier. Motivated by the success of unsupervised machine translation (UMT) in Lample et al. (2018a) and reconstruction regularization by an autoencoder in Sabour et al. (2017), we adopt the encoder-decoder framework from UMT (Lample et al., 2018a) and introduce self-reconstruction loss within one language and back-translation reconstruction loss across languages together with the normal loss from classification. For simplicity, we denote self-reconstruction loss as “withindomain loss” and back-translation reconstruction loss as “cross-domain loss” throughout the paper. Although the encoder from UMT can generate a latent representation for input sentences/documents, there is still a semantic gap between the source and target language. Following Lample et al. (2018a); Chen et al. (2018), we enrich the encoder-decoder framework with a language discriminator that can produce fine-tuned latent representations to align latent representations from two languages better. Such representations are necessary to train a language-invariant classifier that is robust to the shift in languages. In particular, as illustrated in Figure 1, the encoder is used to encode source and target docu5762 Text classifier LC Language discriminator Encoder adv Decoder Decoder Source language Target language Encoder cross within domain domain within domain adv training training Text label Ladv Lwd_src Lwd_tgt Lcd_src Lcd_tgt LD Figure 1: Multi-view encoder classifier (MVEC) architecture. Blue (red) lines indicate the message flow within the source/target language (across languages), respectively. Green lines indicate the message flow from the encoder to the text classifier. The encoder and decoder share the same parameters. 
ments (a sequence of sentences) into a shared latent space, while the decoder is responsible for decoding documents from the latent space back into the source or the target language. Following Lample et al. (2018a), the encoder-decoder is shared by both languages (domains) and is trained both within-domain and cross-domain. The language discriminator aims to predict the language of each document, and the classifier is trained to assign each document to one of the predefined class labels. Under the unsupervised setting, MVEC only observes unlabeled monolingual corpora from the two languages and some labeled documents in the source language. The unlabeled monolingual data is normally sampled from the application domain, i.e., unlabeled product reviews or social media posts, and is used both for adapting the pretrained LMs to the target domain and for training the UMT. As shown in Figure 1, unlabeled source and target data only pass through the encoder-decoder and the language discriminator, while labeled source data pass through all components of the system, including the sentiment classifier. For evaluation purposes, we may have labeled documents in the target language; however, they are only used at test time. In the following subsections, we introduce each component of MVEC in detail.

3.3 Encoder-Decoder

Let $x^{(l)} = (x^{(l)}_1, x^{(l)}_2, \cdots, x^{(l)}_n)$ denote an input document of $n$ words from a particular language $l$, where $l \in \{src, tgt\}$. The encoder is a neural network $e_{\theta_{enc}}(x^{(l)})$ parameterized by $\theta_{enc}$ that produces a sequence of $n$ hidden states $Z^{(l)} = (z^{(l)}_1, z^{(l)}_2, \cdots, z^{(l)}_n)$ from the word embeddings of the $x^{(l)}_i$, where $z^{(l)}_i$ is the latent representation of $x^{(l)}_i$ in the shared latent space and $\theta_{enc}$ is shared between the two languages. The encoder could be a BiLSTM or a transformer (Vaswani et al., 2017). In this paper, we adopt the transformer, which has achieved enormous success in recent text representation learning tasks (Devlin et al., 2019; Conneau and Lample, 2019). Given $Z^{(l)}$ as input, the decoder $d_{\theta_{dec}}(Z^{(l)})$ generates the output sequence $y^{(l)} = (y^{(l)}_1, y^{(l)}_2, \cdots, y^{(l)}_k)$. We use the same transformer-based decoder as in Conneau and Lample (2019), parameterized by $\theta_{dec}$. For simplicity, we will denote the encoder and decoder by $e(x^{(l)})$ and $d(Z^{(l)})$ instead of $e_{\theta_{enc}}(x^{(l)})$ and $d_{\theta_{dec}}(Z^{(l)})$.

Without imposed constraints, the encoder-decoder is likely to merely memorize every input word one by one. To improve its robustness, we follow Lample et al. (2018a) and adopt Denoising Autoencoders (DAE) (Vincent et al., 2008), which recover the input from a corrupted version of it. There are three ways to inject noise into a document: shuffling, dropout, and replacement by special words. In our model, we drop and replace every word with probabilities $p_d$ and $p_b$, respectively, and we slightly shuffle the input document by applying a random permutation $\sigma$, where $p_d$ and $p_b$ can be viewed as hyper-parameters controlling the noise level. In our design, the permutation $\sigma$ satisfies $|\sigma(i) - i| \le k$ for all $i \in \{1, \cdots, n\}$, where $n$ is the length of the input document and $k$ is another hyper-parameter. Note that the noise model is only applied to the unlabeled data used for training the encoder-decoder and the discriminator, while labeled data is kept unchanged for training all components. We use $G(\cdot)$ to denote this stochastic noise model, which takes an input document $x^{(l)}$ and generates $G(x^{(l)})$, a randomly sampled noisy version of $x^{(l)}$.
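To make the noise model $G(\cdot)$ concrete, here is a minimal sketch (not the authors' implementation; the function name and defaults are illustrative, with $p_d$, $p_b$, and $k$ set to the values reported later in the training details, 0.1, 0.2, and 3):

```python
import random

def add_noise(tokens, p_drop=0.1, p_blank=0.2, k=3, blank="<BLANK>"):
    """Corrupt a token sequence for denoising auto-encoding: drop a word
    with probability p_drop, replace it with a special token with
    probability p_blank, and locally shuffle the result so that no token
    moves more than k positions (|sigma(i) - i| <= k)."""
    corrupted = []
    for tok in tokens:
        r = random.random()
        if r < p_drop:
            continue                    # word dropout
        elif r < p_drop + p_blank:
            corrupted.append(blank)     # replacement by a special word
        else:
            corrupted.append(tok)
    # Bounded local shuffle: sorting positions perturbed by uniform noise
    # in [0, k+1) guarantees that each token is displaced by at most k.
    keys = [i + random.uniform(0, k + 1) for i in range(len(corrupted))]
    order = sorted(range(len(corrupted)), key=lambda i: keys[i])
    return [corrupted[i] for i in order]

# Example: add_noise("this film was a pleasant surprise".split())
```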
To incorporate the encoder-decoder as a regularization component, we follow Lample et al. (2018a) and consider both a within-domain and a cross-domain objective function. The first objective aims to reconstruct a document from a noisy version of itself within a language, whereas the second (cross-domain) objective teaches the model to translate an input document across languages. Specifically, given a language $l \in \{src, tgt\}$, the within-domain objective function can be written as:

$$R_{wd}(\theta_{ed}, l) = \mathbb{E}_{x \sim D_l,\, \hat{x} \sim d(e(G(x)))}\big[\Delta(x, \hat{x})\big] \quad (1)$$

where $\theta_{ed} = [\theta_{enc}, \theta_{dec}]$, $\hat{x} \sim d(e(G(x)))$ is a reconstruction of the corrupted version of $x$ sampled from the monolingual dataset $D_l$, and $\Delta$ is the sum of token-level cross-entropy losses, measuring the discrepancy between two sequences. Similarly, we teach the encoder-decoder to reconstruct $x$ in one language from a translation of $x$ in the other language, leading to the following cross-domain objective function:

$$R_{cd}(\theta_{ed}, l_1, l_2) = \mathbb{E}_{x \sim D_{l_1},\, \hat{x} \sim d(e(T(x)))}\big[\Delta(x, \hat{x})\big] \quad (2)$$

where $(l_1, l_2) \in \{(src, tgt), (tgt, src)\}$ and $T(\cdot)$ is the current UMT model applied to the input document $x$, translating from language $l_1$ to language $l_2$.

3.4 Language Discriminator

Cross-lingual classifiers work well when the input produced by the encoder is language-invariant, as studied in Chen et al. (2018). Thus, we prefer our encoder to map input documents from both languages into a shared feature space that is independent of the language. To achieve this goal, we follow Chen et al. (2018); Lample et al. (2018a) and introduce a language discriminator into our model: a feed-forward neural network with two hidden layers and one softmax layer that identifies the language of the encoder's output. In particular, we minimize the following cross-entropy loss function:

$$L_D(\theta_D \mid \theta_{enc}) = -\mathbb{E}_{(l, x^{(l)})}\big[\log P_D(l \mid e(x^{(l)}))\big] \quad (3)$$

where $\theta_D$ denotes the parameters of the discriminator, $(l, x^{(l)})$ are language and document pairs uniformly sampled from the monolingual datasets, and $P_D(\cdot)$ is the output of the softmax layer. Meanwhile, the encoder is trained to "fool" the discriminator:

$$L_{adv}(\theta_{enc} \mid \theta_D) = -\mathbb{E}_{x^{(l_i)} \sim D_{l_i}}\big[\log P_D(l_j \mid e(x^{(l_i)}))\big] \quad (4)$$

with $l_j = l_1$ if $l_i = l_2$, and vice versa.

3.5 Multi-view Classifier

Thus far, we have described how we obtain a language-invariant latent space to encode two languages, which may not be sufficient to generalize well across languages if we simply train a classifier on the encoder's output for the source language (Chen et al., 2018). One key difference between Chen et al. (2018) and our work is that we use UMT (Lample et al., 2018a), which can generate multiple views of the labeled input documents from the source language. We can thereby benefit from multi-view learning's superior generalization capability over single-view learning (Zhao et al., 2017). In particular, we consider two views of the input: (i) the encoded labeled documents from the source language; (ii) the encoded back-translations of the source documents in the target language. Our learning objective is to train the classifier to match the predicted document labels with the ground truth from the source language and to encourage the predictive distributions on the two views to be as similar as possible. We consider the following objective function:

$$L_C(\theta_C, \theta_{ed}) = \mathbb{E}_{(x,y)}\Big[\Delta\big(y, P_{\theta_c}(e(x))\big) + \underbrace{D_{KL}\big(P_{\theta_c}(e(x)) \,\|\, P_{\theta_c}(e(T(x)))\big)}_{\text{two views' consensus}}\Big] \quad (5)$$

where $(x, y) \sim \{D^L_{src}, y^L_{src}\}$, $D_{KL}(\cdot \| \cdot)$ is the KL divergence measuring the difference between the two distributions, $y$ is the class label of the input document $x$, and $\theta_c$ are the parameters of the classifier.
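As an illustration of Eq. (5), here is a minimal PyTorch-style sketch of the two-view classifier loss (illustrative only, not the authors' implementation; all names are hypothetical, and the encoded views are assumed to be precomputed tensors):

```python
import torch.nn.functional as F

def multi_view_classifier_loss(classifier, enc_src, enc_backtrans, labels):
    """Eq. (5): cross-entropy on the encoded source documents plus a KL
    consensus term that encourages the prediction on the back-translated
    view to agree with the prediction on the source view."""
    logits_src = classifier(enc_src)         # view 1: encoded source documents
    logits_bt = classifier(enc_backtrans)    # view 2: encoded back-translations
    ce = F.cross_entropy(logits_src, labels)
    # KL(P_src || P_bt): kl_div expects log-probabilities of the second
    # distribution and probabilities of the first.
    consensus = F.kl_div(F.log_softmax(logits_bt, dim=-1),
                         F.softmax(logits_src, dim=-1),
                         reduction="batchmean")
    return ce + consensus
```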
Following previous studies in text classification (Devlin et al., 2019), we use the first token's representation in the last hidden layer of the transformer encoder as the document representation vector. The classifier is a feed-forward neural network with two hidden layers and a softmax layer. The final objective function at one iteration of our learning algorithm is to minimize the following loss:

$$L_{all} = L_C + \lambda_{wd} \times (R_{wd}^{src} + R_{wd}^{tgt}) + \lambda_{cd} \times (R_{cd}^{src} + R_{cd}^{tgt}) + \lambda_{adv} \times L_{adv} \quad (6)$$

where $\lambda_{wd}$, $\lambda_{cd}$, $\lambda_{adv}$ are hyper-parameters that trade off the within-domain loss, the cross-domain loss, and the adversarial loss, respectively.

3.6 Training Algorithm

Our model relies on an initial translation machine $T^{(0)}$, which provides a translation from one language to the other for calculating the cross-domain loss in Eq. (2) and the classifier loss in Eq. (5). To accelerate training, we initialize $T^{(0)}$ by pretraining a transformer-based UMT (Conneau and Lample, 2019) for a certain number of steps, with the same encoder-decoder architecture as our model, on monolingual Wikipedia text. After pretraining, we use the pretrained encoder-decoder network to initialize our model and start training the classifier and the discriminator. Meanwhile, we refine the encoder and the decoder on monolingual data and labeled data from the source language. During each training step, the optimization alternates between updating $\theta_D$ in Eq. (3) and updating $\theta_{ed}$ and $\theta_C$ in Eq. (6). Note that if a batch of documents drawn from the monolingual data is entirely unlabeled, we suspend the update of the classifier parameters and only update the parameters of the language discriminator and the encoder-decoder. Algorithm 1 gives the detailed procedure.

Algorithm 1 The proposed MVEC algorithm.
1: procedure TRAINING(D_src, D_tgt, y^L_src)   ▷ D_src, D_tgt: monolingual datasets; y^L_src: labels in the source language
2:   T^(0) ← pretrain a transformer-based UMT using (Conneau and Lample, 2019)
3:   for t = 0, · · · , max_epoch do
4:     Use T^(t) to translate each document in a batch
5:     θ_D ← argmin L_D in Eq. (3) while fixing θ_C, θ_ed
6:     θ_C, θ_ed ← argmin L_all in Eq. (6) while fixing θ_D
7:     Update T^(t+1) ← {e^(t), d^(t)}
8:   return θ_C, θ_enc
9: end procedure

4 Experiment

We conduct experiments on cross-lingual multi-class and binary sentiment classification using five language pairs involving 11 tasks. More specifically, English is always the source language, and the target languages are French, German, Japanese, Chinese, and Arabic, respectively.

4.1 Datasets

Amazon Review (French, German, Japanese). This is a multilingual sentiment classification dataset (Duh et al., 2011) in four languages, including English (en), French (fr), German (de), and Japanese (ja), covering three products (book, DVD, and music). For each product in each language, there are 2000 documents in each of the training and test sets. Each document contains a title, a category label, a review, and a 5-point-scale star rating. Following Xu and Yang (2017); Chen and Qian (2019), we convert multi-class ratings to binary ratings by thresholding at 3 points. For each product, since the test set in English is not used, we combine the English training and test sets, randomly sample 20% (800) of the documents as the validation set to tune hyper-parameters, and use the remaining 3200 samples for training.
For each target language, we use the original 2000 test samples for comparison with previous methods. Unlike Chen et al. (2018) and Chen and Qian (2019), which used labeled data in the target language for model selection, we only use the labels of reviews in the target language for testing. There are 105k, 58k, 317k, and 300k unlabeled reviews for English, French, German and Japanese, respectively, which can be used as monolingual data to train the encoder-decoder of our model.

Yelp and Hotel Review (Chinese). This dataset is from two sources: (i) 700k Yelp reviews in English with five classes from Zhang et al. (2015), and (ii) 170k hotel reviews in Chinese, segmented and annotated with five classes, from Lin et al. (2015). Following the same setup as Chen et al. (2018), we split all Yelp reviews into a training set with 650k reviews and a validation set with 50k reviews. The 650k review contents also serve as the monolingual training data for English. For the Chinese hotel review data, we sample 150k reviews as the monolingual training set. The remaining 20k reviews are treated as the test set.

Social Media Posts (Arabic). The BBN Arabic Sentiment dataset is from Mohammad et al. (2016). It contains 1200 documents from social media posts annotated with three labels (negative, neutral, positive). The original dataset was split into one half for training and the other half for testing. Since we do not need validation data in the target language to tune the model, we randomly sample 1000 documents as test data. For the English resource, we still use Yelp reviews and follow the same split as in the Chinese case, but convert the 5-level reviews into 3 levels (2 → negative, 3 → neutral, 4, 5 → positive). Also, we randomly sample 21,161k sentences from the United Nations Corpus Arab subset (Ziemski et al., 2016) as unlabeled monolingual data for our model training.

4.2 Experiment Setting

For French, German and Japanese, we perform binary classification. For Chinese and Arabic, we perform multi-class classification.

Data Preprocessing. Following Lample et al. (2018c), we extract and tokenize the monolingual data of each language using Moses (Koehn et al., 2007). We then segment words into subword units with byte-pair encoding (Sennrich et al., 2016), using the fastBPE implementation, in three steps: the BPE codes are collected from the pretrained XLM-100 models (Conneau and Lample, 2019), then applied to all tokenized data, and finally used to extract the training vocabulary. To constrain the model size, we only keep the top 60k most frequent subword units in our training set. Finally, we binarize the monolingual data and the labeled data for model training, validation and testing.

Pretraining Details. As mentioned earlier, our model depends on an initial translation machine to compute the reconstruction loss and the classifier loss. We leverage pretrained language models (Conneau and Lample, 2019) to initialize a transformer-based UMT (Lample et al., 2018a) and train it on Wikipedia text3. In particular, we sample 10 million sentences for each language pair and use the XLM library4 to train a UMT (Lample et al., 2018a) for 200K steps. The resulting encoder-decoder is used to initialize our model. For word embedding initialization, we use the embeddings obtained from the first layer of the pretrained language models (Conneau and Lample, 2019), which have demonstrated better cross-lingual performance than MUSE (Lample et al., 2018b) on a number of evaluation metrics.

Training Details.
In our experiment, both encoder and decoder are 6 layer transformers with 8head self-attention. We set both subword embedding and hidden state dimension to 1024 and use greedy decoding to generate a sequence of tokens. The encoder-decoder and classifier are trained using Adam optimizer (Kingma and Ba, 2015) with a learning rate of 10−5 and a mini-batch size of 32. We set the hidden dimension to 128 for both clas3http://dumps.wikimedia.org/ 4www.github.com/facebookresearch/XLM sifier and discriminator. For parameters of denoising auto-encoder, we set pd = 0.1, pb = 0.2 and k = 3 following Lample et al. (2018a). Finally, we perform a grid search for hyper-parameters on {0.5,1,2,4,8} and set λwd, λcd to 1 and λadv to 4. To prevent gradient explosion, we clip the gradient L2 norm by 5.0. Our approach is implemented in PaddlePaddle5 and all experiments are conducted on an NVIDIA Tesla M40 (24GB) GPU. Competing Methods. We have compared our method with several recently published results. Due to the space limit, we briefly introduce several representative baselines: LR+MT translated the bag of words from target language to source language via machine translation and then built a logistic regression model. BWE baselines rely on Bilingual Word Embeddings (BWEs), wherein 1to-1 indicates that we are only transferring from English, while 3-to-1 means the training data from all other three languages. CLDFA (Xu and Yang, 2017) was built on model distillation on parallel corpora with adversarial feature adaptation technique. PBLM (Ziser and Reichart, 2018) used bilingual word embeddings and pivot-based language modeling for cross-domain & cross-lingual classification. MBERT (Devlin et al., 2019) and XLM-FT (Conneau and Lample, 2019) directly fine-tuned a single layer classifier based on pretrained LM multilingual BERT and XLM. 4.3 Experiment Results In Table 1 and Table 2, we compare our method with others based on their published results or our reproduced results from their code. Our results are averaged based on 5 rounds of experiment with the standard deviation around 1%-1.5%. Following previous baselines, we do not report them here. Our first observation from Table 1 is that our model and the fine-tuned multilingual LM MBERT (Devlin et al., 2019) and XLM-FT (Conneau and Lample, 2019) outperform all previous methods including the methods with cross-lingual resources for 8/9 tasks by a large margin, which indicates the huge benefit from pretrained LMs in the zero-shot setting. Compared with MBERT and XLM-FT, our model obtains better performance when the target language is more similar to the source language, for example, German and French, and one task in Japanese. 
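To tie the training details above back to Algorithm 1, the following is a minimal PyTorch-style sketch of one training iteration (illustrative only, not the paper's PaddlePaddle code). The loss callables stand in for the terms of Eqs. (1)-(5), the weights default to λwd = λcd = 1 and λadv = 4 as reported above, and all other names are hypothetical.

```python
import torch

def train_step(batch, enc_dec, discriminator, classifier,
               opt_main, opt_disc, losses,
               lam_wd=1.0, lam_cd=1.0, lam_adv=4.0, clip=5.0):
    """One iteration in the spirit of Algorithm 1. `losses` is a dict of
    callables computing the loss terms of Eqs. (1)-(5) for the batch."""
    # Step 1: update the language discriminator on Eq. (3).
    d_loss = losses["discriminator"](discriminator, enc_dec, batch)
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # Step 2: update the encoder-decoder and classifier on Eq. (6).
    # opt_main is assumed to hold only encoder-decoder and classifier
    # parameters, so the discriminator is not updated in this step.
    total = lam_wd * losses["within_domain"](enc_dec, batch) \
          + lam_cd * losses["cross_domain"](enc_dec, batch) \
          + lam_adv * losses["adversarial"](enc_dec, discriminator, batch)
    if batch.get("labels") is not None:  # classifier term only for labeled source batches
        total = total + losses["classifier"](classifier, enc_dec, batch)
    opt_main.zero_grad()
    total.backward()
    params = list(enc_dec.parameters()) + list(classifier.parameters())
    torch.nn.utils.clip_grad_norm_(params, clip)  # clip gradient L2 norm at 5.0
    opt_main.step()
    return d_loss.item(), total.item()
```

In the setup described above, opt_main and opt_disc would be Adam optimizers with a learning rate of 10^-5 and a mini-batch size of 32.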
5http://www.paddlepaddle.org/ 5766 German (2) French (2) Japanese (2) Approach books DVD music avg books DVD music avg books DVD music avg With cross-lingual resources LR+MT 79.68 77.92 77.22 78.27 80.76 78.83 75.78 78.46 70.22 71.30 72.02 71.18 CR-RL1 79.89 77.14 77.27 78.10 78.25 74.83 78.71 77.26 71.11 73.12 74.38 72.87 Bi-PV2 79.51 78.60 82.45 80.19 84.25 79.60 80.09 81.31 71.75 75.40 75.45 74.20 CLDFA3 83.95 83.14 79.02 82.04 83.37 82.56 83.31 83.08 77.36 80.52 76.46 78.11 With implicit cross-lingual resources UMM4 81.65 81.27 81.32 81.41 80.27 80.27 79.41 79.98 71.23 72.55 75.38 73.05 PBLM5 78.65 79.90 80.10 79.50 77.90 75.65 75.95 76.50 Without cross-lingual resources BWE (1-to-1) 76.00 76.30 73.50 75.27 77.80 78.60 78.10 78.17 55.93 57.55 54.35 55.94 BWE (3-to-1) 78.35 77.45 76.70 77.50 77.95 79.25 79.95 79.05 54.78 54.20 51.30 53.43 MAN-MoE6 82.40 78.80 77.15 79.45 81.10 84.25 80.90 82.08 62.78 69.10 72.60 68.16 MBERT7 84.35 82.85 83.85 83.68 84.55 85.85 83.65 84.68 73.35 74.80 76.10 74.75 XLM-FT8 86.85 84.20 85.90 85.65 88.1 86.95 86.20 87.08 80.95 79.20 78.02 79.39 MVEC (Ours) 88.41 87.32 89.97 88.61 89.08 88.28 88.50 88.62 79.15 77.15 79.70 78.67 1 Xiao and Guo (2013) 2 Pham et al. (2015) 3 Xu and Yang (2017) 4 Xu and Wan (2017) 5 Ziser and Reichart (2018) 6 Chen and Qian (2019) 7 Devlin et al. (2019) 8 Conneau and Lample (2019) Table 1: Prediction accuracy of binary classification in the test set for three language pairs. The highest performance is in bold, while the highest performance within the method group is underlined. Approach Chinese (5) Arabic (3) LR+MT 34.01 51.67 DAN 29.11 48.00 mSDA 31.44 48.33 ADAN 42.49 52.54 MBERT 38.85 50.40 XLM-FT 42.22 49.50 MVEC (Ours) 43.36 49.70 Table 2: Prediction accuracy of 5-class and 3-class classification tasks on the test set. In Table 2, we show the comparison between our method and a few other published results, including ADAN (Chen et al., 2018) and mSDA (Chen et al., 2012) for Chinese and Arabic languages in multi-class setting. Similarly, our model obtains slightly better accuracy in Chinese. Overall, built on top of the pretrained LMs and UMT, our full model achieves the state-of-theart performance on 8/11 sentiment classification tasks, especially when the target language is more similar to the source language. Moreover, we illustrate the effectiveness of encoder-decoder based regularization in reducing the language shift in the shared latent space. Intuitively, if the fine-tuned latent space is less sensitive to the language shift, the performance on validation sets and test sets should be highly correlated during training. In Figure 2, we report the average accuracy of both validation and test set w.r.t. training epochs over five runs on Amazon book review data in French. 0 2 4 6 8 10 12 Epoch index 70 75 80 85 90 95 Accuracy (%) MVEC valid-en test-fr 0 2 4 6 8 10 12 Epoch index 70 75 80 85 90 95 Accuracy (%) XLM-FT valid-en test-fr Figure 2: Validation and test accuracy w.r.t. training epochs for Amazon book review in French. Left: our method (MVEC). Right: XLM-FT. From Figure 2, we observe that even though our model’s best validation accuracy is lower than XLM-FT (Conneau and Lample, 2019) in English, it has more correlated accuracy curves than XLM-FT across English and French. For example, the validation accuracy of XLM-FT starts decreasing after epoch 10, while the test accuracy is still increasing. 
Such an observation shows that a latent representation learned solely from self-supervised objectives (e.g., masked language modeling) may not capture the semantic similarity among languages well. Hence, the resulting classifier may work well in the source language but fail to generalize to the target language. In contrast, our model sacrifices some accuracy in the source language but can select better models for the target language in a cross-lingual setting.

4.4 Ablation Study

To understand the effect of the different components of our model on the overall performance, we conduct an ablation study, as reported in Table 3.

                          German  French  Japanese  Chinese  Arabic
Full model                 88.61   88.62     78.67    43.36   49.70
w/o cross-domain loss      83.22   82.40     72.05    35.74   42.80
w/o within-domain loss     82.90   82.15     71.27    37.21   41.60
w/o adversarial training   84.85   84.58     73.75    39.36   46.37
w/o two-views consensus    86.21   86.18     75.25    40.95   46.77

Table 3: Ablation study on five language pairs.

Clearly, the encoder-decoder trained with the within-domain and cross-domain objectives is the most critical component. For the Amazon data in three languages (German, French, Japanese), the model without the cross-domain loss obtains prediction accuracies of 83.22%, 82.40%, and 72.05%, a decrease of 5%–7% compared with the full model. The performance is also significantly degraded when the adversarial training component is removed, because the distributions of latent document representations are then not similar between the two languages. The two-views consensus component also has a significant effect on the performance of our model, with a drop of up to 5 points for en-jp. This result verifies our claim that the cross-lingual model benefits from training on multiple views of the input.

4.5 Case Study

To further explore the effectiveness of our approach, we visualize the encoder's output and the last layer before the softmax for 10 randomly sampled Amazon reviews in English and their translations into French obtained with Google Translation, as shown in Appendix A.2. As seen in the lower-left panel of Figure 3, most red circles and black squares with the same indices are very close for our method but are distant for XLM-FT in the top-left panel. This observation implies that our encoder, combining UMT and a language discriminator, adequately maps the input into a shared language-invariant latent space while preserving semantic similarity. For the last layer before the softmax, even though XLM-FT also generates reasonable representations that separate positive and negative reviews, its data points are scattered randomly. In contrast, our model's output in the lower-right panel of Figure 3 shows two clearly separable clusters with the corresponding labels: one cluster on the left contains all of the positive documents, while the negative examples only appear on the right side.

Figure 3: t-SNE visualizations of various layers of XLM-FT and MVEC for en-fr. Red circles and black squares indicate documents from English and their corresponding translations in the target language, respectively. Numbers indicate the document index and have a one-to-one mapping. +/- indicates labels and we only annotate English documents for simplicity. Top left: encoder output of XLM-FT.
Top right: the last layer before softmax of XLM-FT. Lower left: encoder output of our method. Lower right: the last layer before softmax of our method. 5 Conclusion In this paper, we propose a cross-lingual multiview encoder-classifier (MVEC) that requires neither labeled data in the target language nor crosslingual resources with the source language. Built upon pretrained language models, our method utilizes the encoder-decoder component with a language discriminator from an unsupervised machine translation system to learn a languageinvariant feature space. Our approach departs from previous models that could only make use of the shared language-invariant features or depend on parallel resources. By constructing the finetuned latent feature space and two views of input from the encoder-decoder of UMT, our model significantly outperforms previous methods for 8/11 zero-shot sentiment classification tasks. 5768 References Daniel Andrade, Kunihiko Sadamasa, Akihiro Tamura, and Masaaki Tsuchida. 2015. Cross-lingual text classification using topic-dependent word probabilities. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT), pages 1466–1471, Denver, CO. N´uria Bel, Cornelis H. A. Koster, and Marta Villegas. 2003. Cross-lingual text categorization. In Proceedings of the 7th European Conference on Research and Advanced Technology for Digital Libraries (ECDL), pages 126–139, Trondheim, Norway. Minmin Chen, Zhixiang Eddie Xu, Kilian Q. Weinberger, and Fei Sha. 2012. Marginalized denoising autoencoders for domain adaptation. In Proceedings of the 29th International Conference on Machine Learning (ICML), Edinburgh, UK. Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Q. Weinberger. 2018. Adversarial deep averaging networks for cross-lingual sentiment classification. Trans. Assoc. Comput. Linguistics, 6:557–570. Zhuang Chen and Tieyun Qian. 2019. Transfer capsule network for aspect level sentiment classification. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL), pages 547–556, Florence, Italy. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In Advances in Neural Information Processing Systems (NeurIPS), pages 7057–7067, Vancouver, Canada. Alexis Conneau, Holger Schwenk, Lo¨ıc Barrault, and Yann LeCun. 2017. Very deep convolutional networks for text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 1107–1116, Valencia, Spain. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 4171–4186, Minneapolis, MN. Xin Dong and Gerard de Melo. 2019. A robust selflearning framework for cross-lingual text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6305–6309, Hong Kong, China. Kevin Duh, Akinori Fujino, and Masaaki Nagata. 2011. Is machine translation ripe for cross-lingual sentiment classification? 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACLHLT), pages 429–433, Portland, OR. Akiko Eriguchi, Melvin Johnson, Orhan Firat, Hideto Kazawa, and Wolfgang Macherey. 2018. Zeroshot cross-lingual classification using multilingual neural machine translation. Technical report, arXiv:1809.04686. Yanwei Fu, Timothy M. Hospedales, Tao Xiang, and Shaogang Gong. 2015. Transductive multi-view zero-shot learning. IEEE Trans. Pattern Anal. Mach. Intell., 37(11):2332–2345. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2019. An interactive multi-task learning network for end-to-end aspect-based sentiment analysis. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL), Florence, Italy. Phillip Keung, Yichao Lu, and Vikas Bhardwaj. 2019. Adversarial learning with contextual embeddings for zero-resource cross-lingual classification and NER. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1355– 1360, Hong Kong, China. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, and et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), Prague, Czech Republic. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In Proceedings of the 6th International Conference on Learning Representations (ICLR), Vancouver, Canada. Guillaume Lample, Alexis Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018b. Word translation without parallel data. In Proceedings of the 6th International Conference on Learning Representations (ICLR), Vancouver, Canada. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018c. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5039–5049, Brussels, Belgium. 5769 Yingming Li, Ming Yang, and Zhongfei Zhang. 2019. A survey of multi-view representation learning. IEEE Trans. Knowl. Data Eng., 31(10):1863–1883. Yiou Lin, Hang Lei, Jia Wu, and Xiaoyu Li. 2015. An empirical study on sentiment classification of chinese review using word embedding. In Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation (PACLIC), Shanghai, China. Saif M. Mohammad, Mohammad Salameh, and Svetlana Kiritchenko. 2016. How translation alters sentiment. J. Artif. Intell. Res., 55:95–130. Hieu Pham, Thang Luong, and Christopher D. Manning. 2015. Learning distributed representations for multilingual text sequences. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing (VS@NAACL-HLT), pages 88–94, Denver, Co. Peter Prettenhofer and Benno Stein. 2010. Crosslanguage text classification using structural correspondence learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1118–1127, Uppsala, Sweden. Sara Sabour, Nicholas Frosst, and Geoffrey E. Hinton. 2017. 
Dynamic routing between capsules. In Advances in Neural Information Processing Systems (NIPS), pages 3856–3866, Long Beach, CA. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, Germany. Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of the 5th International Conference on Learning Representations (ICLR), Toulon, France. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing (NIPS), pages 6000–6010, Long Beach, CA. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML), pages 1096–1103, Helsinki, Finland. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844, Hong Kong, China. Min Xiao and Yuhong Guo. 2013. Semi-supervised representation learning for cross-lingual text classification. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1465–1475, Seattle, WA. Kui Xu and Xiaojun Wan. 2017. Towards a universal sentiment classifier in multiple languages. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 511–520, Copenhagen, Denmark. Ruochen Xu and Yiming Yang. 2017. Cross-lingual distillation for text classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1415– 1425, Vancouver, Canada. Qingheng Zhang, Zequn Sun, Wei Hu, Muhao Chen, Lingbing Guo, and Yuzhong Qu. 2019. Multi-view knowledge graph embedding for entity alignment. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI), pages 5429–5435, Macao, China. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems (NIPS), pages 649–657, Montreal, Canada. Jing Zhao, Xijiong Xie, Xin Xu, and Shiliang Sun. 2017. Multi-view learning overview: Recent progress and new challenges. Inf. Fusion, 38:43–54. Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. 2016. Cross-lingual sentiment classification with bilingual document representation learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Berlin, Germany. Michal Ziemski, Marcin Junczys-Dowmunt, and Bruno Pouliquen. 2016. The united nations parallel corpus v1.0. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC), Portoroˇz, Slovenia. Yftah Ziser and Roi Reichart. 2018. Deep pivot-based modeling for cross-language cross-domain transfer with minimal guidance. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 238–249, Brussels, Belgium. Will Y. 
Zou, Richard Socher, Daniel M. Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1393–1398, Seattle, WA. 5770 A Additional Details on Datasets A.1 Summary Statistics of Labeled Datasets For the Amazon Review dataset, we use the same test set as Duh et al. (2011) but transform them into binary labels for comparison with previous works. After transformation, the test set of each product category has equal number of positive and negative ratings (1000 vs 1000). For the Yelp and Hotel Review dataset, we follow the same split as Chen et al. (2018) and keep the original rating. The test set contains 10k documents in total with around 2000 documents for each rating level. The Arabic social media dataset contains 1000 test documents sampled from 1200 social media posts with about 400 documents for each rating level. Since Arabic data is not used for tuning parameters as validation set, we use more test samples than Chen et al. (2018). A.2 Sampled Data for the Case Study In Section 4.5, we randomly sample 10 Amazon book reviews in English, and translate them into French using Google Translation for case study. The sampled reviews and their French translations are as follows: 1. More than mitigated for this tote album that mixes some good ideas (the parodies of works of art) and scenes that only echo the previous albums lazily. Plus qu’att´enu´e pour cet album cabas qui mˆele quelques bonnes id´ees (les parodies d’oeuvres d’art) et des sc`enes qui ne font que faire ´echo aux albums pr´ec´edents paresseusement. 2. What a disappointment, so dear for that. After the Gallic comeback, another album story to release an album. Beautiful pictures, some cool stuff (so the picture with all the characters) ... but Quelle d´eception, si ch`ere pour c¸a. Apr`es le retour des Gaulois, une autre histoire d’album pour sortir un album. De belles photos, des trucs sympas (donc la photo avec tous les personnages) ... mais 3. We obviously believe we know everything about the unspeakable horror of concentration camps. Well no; if it’s a man; literally leaves no voice! Any comment seems inappropriate and for all Nous pensons ´evidemment que nous savons tout sur l’horreur indicible des camps de concentration. Eh bien non, si c’est un homme ne laisse litt´eralement aucune voix! Tout commentaire semble inappropri´e et pour tous 4. “We who have survived”, said Primo Levi, “are not good witnesses, because we belong to this tiny minority who, by prevarication, by skill or luck, have never touched” “Nous qui avons surv´ecu”, a d´eclar´e Primo Levi, “ne sommes pas de bons t´emoins, car nous appartenons `a cette minuscule minorit´e qui, par tergiversation, par habilet´e ou par chance, n’avons jamais touch´e” 5. The questions are targeted and you must have the financial means to follow the plan. I would not recommend this document unlike the other book; I do not know how to lose weight, which is useful Les questions sont cibl´ees et vous devez avoir les moyens financiers de suivre le plan. Je ne recommanderais pas ce document contrairement `a l’autre livre; Je ne sais pas comment perdre du poids, ce qui est utile 6. I read this book in Spanish, in the native language of the writer. I find the book excellent. Not only because of her passionate story but for her love of books and literature J’ai lu ce livre en espagnol, dans la langue maternelle de l’auteur. 
Je trouve le livre excellent. Pas seulement `a cause de son histoire passionn´ee, mais aussi pour son amour des livres et de la litt´erature 7. I have been reading Chattam for many years, and this is the first time I have to struggle to finish one of these novels. The bottom of the story is not bad, but the finished product was almost undrinkable. Je lis Chattam depuis de nombreuses ann´ees et c’est la premi`ere fois que je dois lutter pour terminer l’un de ces romans. Le fond de l’histoire n’est pas mauvais, mais le produit fini ´etait presque imbuvable. 5771 8. THIS BOOK IS GREAT! I had seen the movie before I read the book and I was not disappointed! CE LIVRE EST SUPER! J’avais vu le film avant de lire le livre et je n’ai pas ´et´e d´ec¸u! 9. I still love it so much. But I wonder if we will not go around in circles ... We change the scenery, we add endearing characters, but there is already the originality of the original creators. Je l’aime toujours tellement. Mais je me demande si nous ne tournerons pas en rond ... On change de d´ecor, on ajoute des personnages attachants, mais il y a d´ej`a l’originalit´e de la cr´eation originale. 10. There are many mysteries in life, including Grang´e’s! I really do not understand the extraordinary opinions about this author: it’s wrong! And I hope it is not broadcast too much abroad. Il y a beaucoup de myst`eres dans la vie, y compris ceux de Grang´e! Je ne comprends vraiment pas les opinions extraordinaires sur cet auteur: c’est faux! Et j’esp`ere que c¸a ne sera pas trop diffus´e `a l’´etranger
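For reference, a plot in the style of Figure 3 can be produced from these ten review pairs roughly as sketched below. The `encode` function is only a stand-in for the model's encoder (here it returns random vectors so the snippet runs) and is not part of any released API.

```python
# Rough sketch of the case-study visualization (Section 4.5): project
# encoder outputs of the 10 English reviews and their French translations
# with t-SNE and check whether translation pairs land close together.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def encode(texts):
    # placeholder for the fine-tuned encoder: one vector per document
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 512))

en_docs = [f"english review {i}" for i in range(1, 11)]   # placeholders
fr_docs = [f"french translation {i}" for i in range(1, 11)]

emb = encode(en_docs + fr_docs)
proj = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(emb)

plt.scatter(proj[:10, 0], proj[:10, 1], c="red", marker="o", label="English")
plt.scatter(proj[10:, 0], proj[10:, 1], c="black", marker="s", label="French translation")
for i in range(10):
    plt.annotate(str(i + 1), proj[i])        # same index = same document pair
    plt.annotate(str(i + 1), proj[10 + i])
plt.legend()
plt.show()
```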
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5772–5781 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5772 Efficient Pairwise Annotation of Argument Quality Lukas Gienapp1 Benno Stein2 Matthias Hagen3 Martin Potthast1 1 Leipzig University 2 Bauhaus-Universität Weimar 3 Martin-Luther-Universität Halle-Wittenberg Abstract We present an efficient annotation framework for argument quality, a feature difficult to be measured reliably as per previous work. A stochastic transitivity model is combined with an effective sampling strategy to infer highquality labels with low effort from crowdsourced pairwise judgments. The model’s capabilities are showcased by compiling WebisArgQuality-20, an argument quality corpus that comprises scores for rhetorical, logical, dialectical, and overall quality inferred from a total of 41,859 pairwise judgments among 1,271 arguments. With up to 93% cost savings, our approach significantly outperforms existing annotation procedures. Furthermore, novel insight into argument quality is provided through statistical analysis, and a new aggregation method to infer overall quality from individual quality dimensions is proposed. 1 Introduction For a broad variety of tasks, such as argument mining, argument retrieval, argumentation generation, and question answering, compiling labeled data for argument quality remains an important prerequisite, yet, also a difficult problem. Most commonly, human assessors have been presented with one argument at a time and then asked to assign labels on a graded quality scale ⟨0, 1, 2⟩with label descriptions such as (0) “low quality”, (1) “medium quality” and (2) “high quality” for guidance. In previous work, this was usually done concurrently for multiple orthogonal sub-dimensions of argument quality; judging the overall quality of an argument has been deemed complex (Wachsmuth et al., 2017). But on closer inspection, even the more specialized quality dimensions considered are difficult to be assessed as evidenced by the low reliability scores reported. Especially crowdsourcing suffers from assessors often having different reference frames to base their judgments on and task instructions being nondescript and therefore unhelpful in ensuring consistency. Employing experts, however, not only comes at a significantly higher cost per label; despite their expertise, even experts did not achieve more reliable judgments. We pursue an alternative approach: stochastic transitivity modeling based on pairwise judgments of arguments. This enables the employment of laymen; the decisions required from them remain comparably simple and expect neither prior knowledge nor a common reference frame, while the labels that can be derived from their judgments still exhibit a high accuracy and informativeness. Though pairwise judgment has already been considered for assessment of argument quality (Habernal and Gurevych, 2016; Toledo et al., 2019), its significant cost overhead has hindered widespread application. We explore the lower bound of effort needed to infer labels of sufficient quality. We combine a pairwise model with a highly effective offline sampling strategy to minimize the set of needed pairwise judgments, saving up to 93% of the effort of an exhaustive comparison. 
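To make the setup concrete, the elementary unit such a framework collects is a single pairwise preference judgment over two arguments. The sketch below shows one plausible record layout; the field names are illustrative and do not reflect the released corpus schema.

```python
# Illustrative shape of a single crowdsourced pairwise judgment.
# Field names are made up for this sketch, not the corpus schema.
from dataclasses import dataclass

@dataclass
class PairwiseJudgment:
    topic: str       # debate topic the two arguments belong to
    arg_a: str       # id of the argument shown first
    arg_b: str       # id of the argument shown second
    dimension: str   # "rhetorical" | "logical" | "dialectical"
    outcome: str     # "a", "b", or "tie" -- the worker's preference

judgments = [
    PairwiseJudgment("abortion", "arg_17", "arg_42", "logical", "a"),
    PairwiseJudgment("abortion", "arg_42", "arg_03", "logical", "tie"),
]
```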
As part of this work, we release the Webis Argument Quality Corpus 2020, which includes a total of 41,859 pairwise judgments between 1,271 arguments across the three dimensions of rhetorical, logical, and dialectical quality. Further, inferred scalar scores for the three dimensions as well as overall quality and topic relevance are provided, alongside a reference implementation of our model.1 Carrying out a first analysis of the statistical properties of the corpus, we validate both the new annotation method and the corpus by drawing comparisons to previous work. Since judging overall quality by itself is a difficult task, based on our statistical analysis, we find that euclidean vector length adequately combines scores from the three aforementioned specialized quality dimensions into a single overall quality score. 1Resources: https://webis.de/publications.html?q=ACL+2020 Corpus: https://zenodo.org/record/3780049 Code base: https://github.com/webis-de/ACL-20 5773 2 Related Work Wachsmuth et al. (2017) surveyed many facets of argument quality that are distinguished in argumentation theory, organizing them within three major dimensions: logical quality (the argument’s structure and composition), rhetorical quality (persuasive effectiveness, vagueness, and style), and dialectical quality (contribution to the discourse). Further, they built the first comprehensive argument quality corpus, tasking three experts with annotating arguments with respect to all 15 (sub-)dimensions in their taxonomy. Each dimension has been annotated on a scale from 1 (low) to 3 (high), reaching Krippendorff’s α values between 0.26 and 0.51, depending on the quality dimension. Despite a rigorous setup, the low agreement is evidence that even experts have difficulties to reliably judge argument quality. Potthast et al. (2019) explore the use of argument quality as an evaluation criterion beyond relevance for argument retrieval, thus needing to collect quality judgments for their evaluation task. Based on the taxonomy of Wachsmuth et al., they had the three major dimensions annotated on graded scales ranging from 1 (low) to 4 (high), reproducing the findings of Wachsmuth et al. They recruited highly educated students of at least bachelor’s level education from a national foundation for gifted students who have a strong interest in societal issues. Still, reliable annotation was difficult to achieve due to the highly subjective, complex, and nuanced nature of argument quality. Each study operates on rather small amounts of data; they only annotate 320 (Wachsmuth et al., 2017) and 437 (Potthast et al., 2019) individual arguments. Both setups become nonviable for larger annotation tasks, since the associated labor costs in such (semi-)expert studies are usually high. This is not an issue in crowdsourced settings, where judgments can be collected in abundance for a comparatively cheap price. However, the problem of annotation quality is more severe here: argument quality might be even more difficult to judge without prior domain-specific knowledge, creating the need for annotation frameworks that can still maintain a sufficiently high data quality. Judging from the agreement scores given by Wachsmuth et al. and Potthast et al., obtaining reliable data using classic graded scales proves infeasible, an effect that should be even more pronounced in a crowdsourced setting. Swanson et al. 
(2015) measure an arguments’ quality as the amount of context or inference required for it to be understood, describing an annotation setup where assessors judge seven individual quality dimensions on a 0-1-slider. Recruiting assessors on Amazon Mechanical Turk (MTurk), they use intra-class correlation to estimate inter-rater agreement, with an average value of 0.42 over all topics, thus also indicating a poor reliability (Portney et al., 2009). They further observe a correlation with sentence length, prompting them to remove all sentences shorter than four words. All three studies indicate that absolute rating (i.e., having assessors label a single argument on a given absolute scale without the context of other arguments) performs unfavorably. This rating method, also known as Likert scale or Mean Opinion Score, is known to have two major drawbacks (Ye and Doermann, 2013): (1) Absolute rating is often treated as if it produces data on an interval scale. However, assessors rarely perceive labels as equidistant, thus producing only ordinal data. This leads to a misuse of statistical tests and results in low statistical power of subsequent analyses. (2) Absolute rating is difficult for assessors without prior domain knowledge, since they may be unsure which label to assign. This results in noisy, inconsistent, and unreliable data. As an alternative, preference rating (i.e., a relative comparison by showing two arguments to an assessor and letting them declare their preference towards one of them) has been considered by Habernal and Gurevych (2016), who compile an exhaustive set of pairwise comparisons to infer labels for argument convincingness. For 1,052 arguments on 32 issues, each of the over 16,000 total comparisons was annotated by five different crowd workers on MTurk. While no α statistics are provided, the authors do conclude that preference ratings in a crowdsourced setting are sufficiently accurate, since the best-ranked rater for each pair achieves 0.935 accuracy compared to a gold label. The indicated reliability of pairwise annotation for argument quality is further corroborated by Toledo et al. (2019), who compile a large dataset of about 14,000 annotated argument pairs, and absolute ratings in the 0-1-range for about 6,300 arguments. Pairwise annotations were made in regard to the overall quality of arguments, operationalized as “Which of the two arguments would have been preferred by most people to support/contest the 5774 topic?” Using a strict quality control, they show that the annotated relations consistently reproduce the direction implied by absolute ratings. Yet, annotating quality as a single feature is problematic, since (1) it is hard to capture the multi-facet nature of argument quality in that way (Wachsmuth et al., 2017) and the chosen operationalization is similar to the facet of dialectical quality, neglecting the other major two; and (2) scores for individual quality dimensions are warranted for in-depth training and evaluation for a broad range of argumentation technology (Potthast et al., 2019). Although preference rating seems promising based on the reported reliability, it creates the need for a model that infers score labels from the collected comparison data. Habernal and Gurevych propose the use of PageRank (Page et al., 1999). This is problematic, since cycles in the comparison graph may form rank sinks, distorting the latent rankings. 
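A toy example makes the rank-sink issue tangible. The graph construction below (an edge from the less-preferred to the preferred argument) is only one plausible convention and is not claimed to match Habernal and Gurevych's setup.

```python
# Toy illustration of why PageRank over a raw preference graph is
# distorted by cycles. Edge u -> v means "v was preferred over u"
# (a convention assumed for this sketch).
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("B", "A"),  # A preferred over B
    ("C", "B"),  # B preferred over C
    ("A", "C"),  # C preferred over A  -> a preference cycle A-B-C
    ("D", "A"),  # A preferred over D
    ("E", "D"),  # D preferred over E
])

scores = nx.pagerank(G, alpha=0.85)
for node, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(node, round(score, 3))

# The cycle {A, B, C} has no outgoing edges to D or E, so random-walk mass
# accumulates inside it: all three cycle members receive high, similar
# scores even though the judgments among them are mutually contradictory,
# which distorts the inferred ranking.
```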
Habernal and Gurevych deal with this problem by constructing a directed acyclic graph (DAG) from the collected data prior to applying PageRank, assuming that argument convincingness exhibits the property of total order. However, no prior evidence for this property is apparent. Simpson and Gurevych (2018) note further problems with PageRank and propose the use of Gaussian process preference learning instead, demonstrating a high scalability. However, for a practical approach, an effective strategy to minimize the number of needed comparisons is warranted, since, to build the DAG, exhaustive comparison data is required. This is inefficient; at worst n 2  comparisons have to be obtained for n arguments. Also, no data was collected on how the PageRank method performs on incomplete or sparse comparison data. Chen et al. (2013) also propose an online sampling strategy based on the Bradley-Terry model (Bradley and Terry, 1952). They implement an online Bayesian updating scheme, which, contrary to previous work such as presented by Pfeiffer et al. (2012), does not require retraining the whole model when new comparisons are added. After each comparison added to the total set of annotated pairs, they identify the next pair to be compared by calculating which comparisons would reduce the overall model uncertainty the most. Simpson and Gurevych (2018) opt for a similar active learning approach, but note that that it is prone to overfitting, causing accuracy to decrease. While online learning uses an approximately minimal amount of comparisons, additional drawbacks besides overfitting can be noted: (1) The updating scheme diminishes the reusability of the collected data, since such a specific method of choosing pairs introduces data bias for other applications. (2) Online sampling is complicated to implement on a crowdsourcing platform, preventing multiple workers from making judgments in parallel. (3) In the case of Chen et al. (2013), the model is not equipped to handle comparison ties, i.e., an assessor declaring no preference. Yet, ties frequently occur in real-world annotation tasks. Overall, the Bradley-Terry model appears to be a promising candidate for our purposes: its robustness and statistical properties have been studied in great detail (Hunter, 2004), and it can be efficiently computed (Chen et al., 2013). However, an alternative offline sampling method has to be formulated, which we introduce in the following section. 3 Pairwise Quality Annotation In this section, we define a model to aggregate pairwise judgments into scalar ranking scores and combine different sampling strategies to form a highly efficient annotation framework. 3.1 The Bradley-Terry Model Let D = {d1, . . . , dn} denote a set of n items (e.g., arguments) for which a latent ranking is assumed according to a scale-invariant set Γ = {γ1, . . . , γn} of real-valued “merits”, where the i-th item di has merit γi. When independently comparing pairs of items (di, dj) from D, the probability of item di beating item dj is defined as follows: P(di ≻dj) = γi γi + γj . (1) Using exponential score functions pi = eγi reduces the model to a logistic regression on pairs of individuals (Agresti, 2003): P(di ≻dj) = pi pi + pj . (2) The merits Γ can thus be inferred with maximum likelihood optimization (Hunter, 2004) and the following log-likelihood equation for a pool of pairwise comparisons C, a multiset of pairs (i, j), where i and j are drawn from [1, n]: L(Γ, C) = X (i,j)∈C log P(di ≻dj). 
(3)

3.2 Incorporating Ties

Pairs of items (d_i, d_j) may exist whose merit difference is below a threshold τ so that assessors cannot decide which is better. Rao and Kupper (1967) incorporate such ties into the model as follows:

P(d_i \succ d_j) = \frac{p_i}{p_i + p_j \theta}    (4)

for the probability of preference of d_i over d_j, and

P(d_i \approx d_j) = \frac{p_i p_j (\theta^2 - 1)}{(p_i + p_j \theta)(p_i \theta + p_j)}    (5)

for the probability of no preference between the two, where θ = e^τ. For τ = 0, i.e., assessors being able to differentiate every item pair, these equations reduce to the standard Bradley-Terry model.

3.3 Regularization

The maximization is guaranteed to converge to the unique maximum likelihood estimator in finite steps under the assumption that in every possible partition of the items into two nonempty subsets, some subject in the second set beats some subject in the first set at least once (Hunter, 2004). Thus, a pairwise comparison experiment is restricted in two ways: (i) The matrix formed by the comparisons must construct a strongly connected graph; (ii) The comparisons between the partitions cannot all be won by subjects from the same group, i.e., no item has losses or wins exclusively. Even though the adherence becomes asymptotically likely given an appropriate experiment design (Yan et al., 2011), the problem can be regularized to increase robustness. The regularization term

R(\Gamma) = \sum_{i=1}^{n} \left[ \log\left(\frac{e^1}{e^1 + p_i}\right) + \log\left(\frac{p_i}{p_i + e^1}\right) \right],    (6)

weighted by a regularization parameter λ, is added to model a dummy item d_0 with merit γ_0 = e^1, which is defined to compare against every item with exactly one win and one loss (Chen et al., 2013). Convergence is now ensured as the graph is guaranteed to be strongly connected. Additionally, the merits Γ are no longer scale-invariant, since the merit of the dummy item is fixed at 1.

3.4 Log-Likelihood Maximization

The log-likelihood equation, with regularization parameter λ and merit threshold τ, takes the form

L(\Gamma, \tau, \lambda, C) = \sum_{(i,j) \in C} \log \begin{cases} P(d_i \succ d_j) & \text{if } d_i \succ d_j \\ P(d_i \approx d_j) & \text{if } d_i \approx d_j \end{cases} + \lambda R(\Gamma).    (7)

Γ is initialized with 1's by convention. Chen et al. (2013) propose λ ∈ [0.1, 10], inferring rankings similar to the unregularized problem for sufficiently small values, while regularized rankings for larger λ values often outperform the baseline for a broad range of applications. The maximization was solved using BFGS optimization.

3.5 Sparsification

Sampling strategies are needed to reduce the amount of comparisons, as obtaining an exhaustive set of \binom{n}{2} comparisons becomes infeasible with larger item counts. Nevertheless, sampling strategies should preserve a high annotation quality. Burton (2003) describes a strategy where items are arranged in a cyclical way. A main feature is that each item is required to appear in the same number of pairs in order to gain the same amount of information about each item. For a random permutation of the items in D, the i-th item is compared with the (i+1)-th item for i < n, and item d_n with item d_1, thus completing the cycle. This can be generalized to higher step sizes s: for instance, if s = 2, all items that are separated by two positions around the ring are compared. However, this strategy suffers from the major drawback that for some step sizes the resulting graph has multiple unconnected components, thus violating the restriction that the comparison matrix must form a strongly connected graph. Therefore, complex combinations of different step sizes are needed, resulting in needlessly complicated experimental setups. Alternatively, Yan et al.
(2011) proposed a method of sparse grouped comparisons, where the set of all items D is partitioned into m equisized disjoint subsets Dk, where k ∈[1, m], so that the following constraints hold true: (i) for each Dk, (i, j) ∈C when di, dj ∈ Dk, i ̸= j, and (ii) (i, j) ∈C when di ∈Dk, dj ∈Dk+1 for k = 1, . . . , m −1. 5776 0 20 j 0 10 20 30 i k = 2 0 20 j k = 4 0 20 j k = 8 0 20 j k = 16 0 20 j k = 32 Figure 1: Comparison matrices for n = 32 and different values of k. Variables i and j denote the matrix indices as used in the constraints. Figure 2: Example cyclic group design for six groups of four items. However, in this approach not every item has the same amount of comparisons. To make the strategy of Yan et al. (2011) consistent with this requirement, both strategies can be combined to derive a cyclical group by also including comparisons between group D1 and group Dk, as reflected in the additional constraint (iii) (i, j) ∈C when di ∈D1, dj ∈Dm. This way, the design adheres to the principle that every item should have the same number of comparisons but the overall construction of the experiment remains simple. All combinations of items in the same group, and the Cartesian product of adjacent groups are included. Therefore k· n/k 2  intra-group comparisons and k · n k 2 inter-group comparisons are needed. Thus, the total amount of comparisons c is c = k n k 2 + n/k 2 ! = 3n2 2k −n 2 . (8) If multiple independent judgments per unique comparison are collected, as is frequently done in crowdsourcing, c has to be multiplied by a factor x denoting how many unique judgments are collected per comparison. Figure 2 shows an exemplary cyclic group design for six groups of four items. Shown on the left is the overall design, with intra-group comparisons (Constraint (i)) depicted in the middle, and inter-group comparisons between adjacent groups (Constraints (ii) and (iii)) on the right. Example comparison matrices for n = 32 and different values of k are shown in Figure 1 to provide a visual intuition of the sampling process. Although the comparison matrix is inherently symmetric, to reflect the true count of comparisons, only the lower half is depicted. All comparisons introduced by Constraint (i) are colored in light gray, Constraint (ii) in medium gray, and dark gray for Constraint (iii). Note that for the special case of k = 2, Constraints (ii) and (iii) are equal. 3.6 Model Evaluation To test the accuracy trade-off between exhaustive comparison and sparse comparison on real-world data, twenty topics of 32 arguments each were randomly selected from the UKPConvArg1 corpus (Habernal and Gurevych, 2016), which includes an exhaustive pairwise comparison set for argument convincingness. With each comparison having five independent annotations, a baseline was established by fitting the model on the full set. Then, different values for k and x were used to sample a subset of the comparisons and the proposed model was fitted with each of the sampled comparison sets. The obtained merit ranking was compared against the baseline ranking using Pearson’s ρ, with confidence intervals calculated using bootstrapping (n = 10, 000). For each of the resulting rankings, the amount of used comparisons and the correlation with the baseline ranking are detailed in Table 1. The following interesting properties are apparent: (1) Collecting multiple judgments per unique comparison, as is usual practice in methods relying on graded scales is not sufficiently beneficial. 
Increasing the annotation effort by factor 5 (from x = 1 to x = 5) results in only a minimal gain in ranking accuracy of 0.06 for k = 4. For higher sampling rates, decreasing k yields a larger net increase 5777 x k Our approach Judgments Judgments % ¯ρ 95% CI 2 2480 100 1.00 1.00 1.00 4 1904 76 0.99 0.99 1.00 5 8 944 38 0.96 0.95 0.97 16 464 18 0.88 0.85 0.90 32 224 9 0.67 0.61 0.72 4 1520 61 0.99 0.99 0.99 4 8 752 30 0.95 0.94 0.96 16 368 14 0.86 0.83 0.88 32 176 7 0.64 0.60 0.69 2 1488 60 0.99 0.99 0.99 4 1136 45 0.98 0.98 0.99 3 8 560 22 0.93 0.82 0.95 16 272 10 0.82 0.79 0.85 32 128 5 0.65 0.60 0.69 2 992 40 0.98 0.97 0.99 4 752 30 0.97 0.96 0.97 2 8 368 14 0.91 0.89 0.93 16 176 7 0.78 0.75 0.81 32 80 3 0.59 0.56 0.63 2 496 20 0.95 0.94 0.96 4 368 14 0.92 0.90 0.93 1 8 176 7 0.82 0.79 0.86 16 80 3 0.66 0.61 0.71 32 32 1 0.47 0.40 0.54 Table 1: Our model’s performance under sparsification. x denotes the number of judgments collected per unique comparison, k the group factor along the resulting total number of judgments in absolute and relative terms, compared to an exhaustive comparison and corresponding ρ-correlations with the baseline ranking. in ranking accuracy than increasing x, at the same cost. By example, going from x = 1, k = 16 to x = 1, k = 4 ends up at the same number of comparisons as x = 2, k = 8, but has a slightly higher ranking accuracy. Therefore, it is more economical to increase the sampling rate until the required accuracy is met than collecting multiple judgments. (2) The proposed model and comparison strategy are able to produce near-perfect rankings (x = 1, k = 4, ¯ρ = 0.92±0.02) using only 14%, and acceptable rankings (x = 1, k = 8, ¯ρ = 0.82 ± 0.04) using only 7% of the full comparison set. This significant reduction is a promising sign for employing our model in crowdsourced studies at scale. However, the specific choice of k depends on scale and domain of the data as well as trustworthiness of comparisons. Therefore, we refrain from making a general suggestion for the choice of k. Thus, if the model is to be adapted to drastically different domains or item counts, exploratory studies are advised to estimate the quality tradeoff for a specific use case. 4 The Webis Argument Quality Corpus To maximize its usefulness, the sample of arguments for our argument quality corpus was drawn from the recently published args.me corpus (Stein and Wachsmuth, 2019), a collection of 387,606 arguments crawled from various debate portals. To ensure some topic diversity and relevance of the arguments to the topic, while keeping the amount of judgments within our budget limits, we (1) indexed the args.me corpus using three retrieval models from the Terrier information retrieval library (Ounis et al., 2006) (namely BM25, DPH, and DirichletLM), (2) retrieved texts for 20 topic queries at a depth of 50 texts per topic per model, (3) and pooled all 3000 retrieved texts to remove overlap between the different models. In total, 1,610 unique spans of text remained for the 20 topics. Using Amazon’s Mechanical Turk, in a first preprocessing step, we tasked crowd workers with deciding whether or not a given retrieved item actually contained argumentative text. Each text was judged by five crowd workers, using majority vote as a decision rule. To ensure quality, we recruited only workers for the task with an approval rate of at least 95%, like Swanson et al. (2015). Of the 1,610 input texts, 339 were flagged as nonarguments. 
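The pooling and argument/non-argument filtering just described can be sketched in a few lines. The input layout below is assumed purely for illustration and is not the actual output format of Terrier or Mechanical Turk.

```python
# Sketch of the preprocessing described above: pool the top-k results of
# several retrieval models per topic, then keep only texts that a majority
# of five crowd workers labeled as argumentative.
from collections import Counter

def pool(runs, depth=50):
    """Union of the top-`depth` results of every model for every topic."""
    pooled = {}
    for model_results in runs.values():
        for topic, ranking in model_results.items():
            pooled.setdefault(topic, set()).update(ranking[:depth])
    return pooled

def keep_arguments(votes, min_votes=3):
    """Majority vote over five binary crowd judgments per text id."""
    return {text_id for text_id, labels in votes.items()
            if Counter(labels)["argument"] >= min_votes}

runs = {  # toy rankings per retrieval model and topic
    "BM25":        {"climate": ["d3", "d7", "d9"]},
    "DPH":         {"climate": ["d7", "d2", "d3"]},
    "DirichletLM": {"climate": ["d2", "d3", "d8"]},
}
print(pool(runs, depth=3))    # {'climate': {'d2', 'd3', 'd7', 'd8', 'd9'}}

votes = {"d3": ["argument"] * 4 + ["non-argument"],
         "d8": ["non-argument"] * 3 + ["argument"] * 2}
print(keep_arguments(votes))  # {'d3'}
```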
Most of these texts are noise resulting from using debate platforms as data source; examples include statements of acceptance for a debate (“I accept this debate on [Topic] and will go first [...]”), statements with no argumentative value (“I think [Topic] is good.”), definitions, and in some cases even jokes or personal attacks. 1,271 arguments remained for quality annotation. In a second step, all remaining arguments were annotated for argument quality via a sample of pairwise judgments on which we applied our model. For each pair of arguments, a crowd worker was tasked to select the one that exhibits a higher quality compared to the other with regard to a given description of the respective quality dimensions. The annotation was repeated separately for each of 20 topics and each of the three aforementioned quality dimensions. To make the task accessible to workers without prior knowledge of argumentation theory, the quality dimensions were operationalized as follows: “Which text has the better logical structure?” (logical quality), “Which text has the better style of speech?” (rhetorical quality), and “Which text would be more useful in a debate?” (dialectical quality). Examples were given to annotators as guidance. Table 2 shows exemplary arguments for each of the three quality dimensions alongside a brief explanation why this argument lacks the specified quality. In each task, one such 5778 Dimension Argument Explanation Rhetorical quality “Gender is a social construct cuse we are told when we are first born by a dude what gender but if he didnt tell us that we woudnt have a gender its only cuse he told us that gender that we are that gender.” This argument is of low rhetorical quality, as it lacks proper sentence structure, uses informal speech, has typos, and its use of ellipsis makes it hard to follow. Logical quality “I support an abortion ban. We must not forget that abortion opposes the principle of sanctity of life. Women are blessed with the gift of giving birth to another life and hence, should accept it with responsibility.” Even though this argument has a clearly stated claim, the evidence used to support it is insufficient. Key concepts are not defined (What is ’sanctity of life’? Why does it apply to unborn fetuses?) and the conclusion (’Women should accept it with responsibility.’) does not necessarily follow from the evidence. Dialectical quality “Banning abortion would mean that there is more people in the world. This leads to overpopulation which is a major problem and 842 million people are undernourished every year. More people only causes more problems.” This argument is not very convincing since the evidence (overpopulation) presented in support of the conclusion (abortion ban) is not very relevant to the issue. It can easily be invalidated by, for example, offering better solutions to overpopulation than abortion. Thus, the argument does not make a meaningful contribution to resolving the debate conflict. Table 2: Example arguments from the args.me corpus with accompanying explanation for why each argument lacks the specified quality. negative example as well as one positive example was provided with explanations, ensuring no topic overlap between annotated material and example. Five comparisons were presented together as one task. The comparison sets of five were compiled randomly to minimize order effects. A cyclic group comparison strategy as described in Section 3.5 with k = 8 was employed, with each pair annotated by one worker. 
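Putting the pieces together, the cyclic group design of Section 3.5 and the tie-aware, regularized Bradley-Terry fit of Sections 3.2-3.4 can be sketched end to end as below. This is an illustrative reimplementation with made-up judgments, not the reference implementation released with the corpus; hyperparameters and numerical details may differ.

```python
# End-to-end sketch: sample pairs with the cyclic group design, collect a
# preference or tie per pair, and infer merits with the regularized,
# tie-aware Bradley-Terry model.
from itertools import combinations, product
import random
import numpy as np
from scipy.optimize import minimize

def cyclic_group_pairs(items, k, seed=0):
    """Constraint (i): all pairs inside each group; (ii)/(iii): all cross
    pairs between adjacent groups, wrapping from the last group back to the
    first. Assumes k divides the number of items; for k = 2 the wrap-around
    duplicates constraint (ii) and duplicates would have to be dropped."""
    items = list(items)
    random.Random(seed).shuffle(items)      # random assignment to groups
    size = len(items) // k
    groups = [items[g * size:(g + 1) * size] for g in range(k)]
    pairs = []
    for g in range(k):
        pairs += list(combinations(groups[g], 2))
        pairs += list(product(groups[g], groups[(g + 1) % k]))
    return pairs

def fit_bradley_terry(n, judgments, tau=0.1, lam=1.0):
    """judgments: triples (i, j, outcome), outcome 'win' if item i was
    preferred over item j and 'tie' if the worker saw no difference."""
    theta = np.exp(tau)

    def neg_log_likelihood(gamma):
        p = np.exp(gamma)
        ll = 0.0
        for i, j, outcome in judgments:
            win = p[i] / (p[i] + p[j] * theta)                        # Eq. (4)
            tie = (p[i] * p[j] * (theta ** 2 - 1)                     # Eq. (5)
                   / ((p[i] + p[j] * theta) * (p[i] * theta + p[j])))
            ll += np.log(win if outcome == "win" else tie)
        # dummy item with score e^1 that beats and loses to everyone once
        reg = np.sum(np.log(np.e / (np.e + p)) + np.log(p / (p + np.e)))  # Eq. (6)
        return -(ll + lam * reg)                                      # Eq. (7), negated

    res = minimize(neg_log_likelihood, np.ones(n), method="BFGS")
    return res.x                             # one merit per item

n, k = 64, 8
print(len(cyclic_group_pairs(range(n), k)))  # 736 = 3n^2/(2k) - n/2, cf. Eq. (8)

# tiny run with made-up outcomes so the sketch executes quickly
pairs = cyclic_group_pairs(range(8), k=4, seed=1)
rng = random.Random(1)
judgments = [(i, j, rng.choice(["win", "win", "tie"])) for i, j in pairs]
merits = fit_bradley_terry(8, judgments)
print(np.argsort(-merits))                   # inferred ranking, best item first
```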
On average, a topic pooling consists of n = 64 unique arguments, with csampled = 698 and cexhaustive = 2, 043. The mean sample rate compared to the exhaustive comparison set therefore is 0.342. Erring on the safe side, we chose to maximize correlation with what can be expected from an exhaustive comparison as guided by Table 1. In total, 2,797 HITs were carried out. A reward of $0.08 per HIT was paid, amounting to $268.54 per quality dimension and $805.54 total while ensuring an hourly rate of at least $8. In comparison to the setup of Habernal and Gurevych (2016), who carried out an exhaustive comparison with a factor of x = 5 crowd workers per pair, the annotation effort for our study could be reduced by 93.17% based on our model. Nevertheless, if a higher accuracy is deemed necessary in future experiments, our comparison set can easily be extended by adding additional votes per comparison or by increasing the group size. Compared to a traditional annotation setup using graded scales, having five workers annotate each item on a scale from 0 (low) to 4 (high) would put the total annotation cost for one quality dimension at around $150.00, supposing a reward of $0.08 per HIT. Although cheaper, the annotation quality would be much lower as per the reliabilities reported in previous work. Moreover, the highly increased level of detail in the quality scores produced by our new approach is worth the extra cost. In cases where annotation quality is not as important, sampling at higher values of k still achieves acceptable correlation scores, rendering our method even cheaper than the traditional approach. 5 Corpus Analysis In this section, we carry out a statistical analysis of our new corpus. First, we study the distribution and the correlation effects between the different quality dimensions, and between the quality dimensions and text length, to draw comparisons to the prior work of Wachsmuth et al. (2017). Then, we explore the hypothesis of overall quality being a latent variable by analyzing the influence of quality dimensions. 5.1 Distribution Distributions of scores for all three different quality dimensions are shown in Table 3a. Additionally, the distribution of text length in the corpus as well as scatterplots for text length and quality are given. All three dimensions exhibit a similar distribution, centered at zero. Given that the dummy item of the model, i.e., an item that is defined to have exactly one win and one loss against every other item, has a fixed score of 1, this indicates that the majority 5779 (a) (b) Dimension Rhetorical Logical Dialectical Our Expert Our Expert Our Expert Rhetorical 0.63 0.81 0.61 0.75 Logical 0.63 0.81 0.55 0.78 Dialectical 0.61 0.75 0.55 0.78 Pro Con Pro Con Pro Con Rhetorical 0.62 0.63 0.60 0.61 Logical 0.62 0.63 0.60 0.51 Dialectical 0.60 0.61 0.60 0.51 (c) Dimension Overall l > 100 Rhetorical 0.65 0.37 Logical 0.64 0.37 Dialectical 0.63 0.28 (d) Step Variance Rhetorical Logical Dialectical 1 0.73 -0.5866 -0.5715 -0.5738 2 0.15 0.1050 0.6489 -0.7536 3 0.12 -0.8031 0.5023 0.3206 (e) Sample Sizes Overall l > 100 1271 869 Pro Con 675 596 Args. Non-args. 1271 339 Table 3: (a) Distributions and scatterplots for quality scores and text length in the corpus. (b) Pearson ρ correlation coefficient cross-tabulation for different attribute combinations, full set and per stance. Expert values are taken from Wachsmuth et al. (2017), Table 3. Maximum per column in bold. 
(c) Correlation between quality dimensions and text length, full and only for texts longer than 100 words. (d) Component vectors and explained variance for PCA steps on argument quality. (e) Sample sizes for (b) and (c). of texts in our corpus are only of mediocre argumentative quality. Also, all three distributions are slightly asymmetric, with the lower end extending more. The distribution of text lengths is fairly similar across all lengths, with only one apparent spike around 0-50 words. 5.2 Correlation Table 3b shows correlation coefficients for the three quality dimensions when compared to each other, and when compared to argument stance. The interquality correlation as given by Wachsmuth et al. is also included, and found to be commensurate with their figures. Although the correlation is lower in total, which is expected given the much bigger sample size and different annotation methodology, the general pattern is reproduced, with one slight deviation: dialectical quality appears to correlate slightly more with rhetorical quality in our corpus, but with logical quality in their data. However, given that the two correlation coefficients are nearly equal in the data of Wachsmuth et al., and the value differences between the different quality dimensions in our data are also too small to draw any conclusions regarding whether two of the three are more intertwined than the third, this effect is not problematic. The correlation between the quality dimensions being fairly high hints at them being dependent on a latent variable, which could be the overall argumentation quality. When computing the scores separately for each stance, no systematic difference is apparent. Following the reasoning of Potthast et al. (2019), this may indicate that the scoring method is not prone to assessor bias. A correlation of quality and text length (measured as word count), as also noted by Swanson et al. (2015), is evident (Table 3c). While this could hint at a data bias, with crowd workers just voting for longer texts in the comparison but not actually reading all of it, the effect is much less pronounced when only measuring the correlation in texts longer than 100 words (n = 869). Thus, much of the pronounced effect can be explained by short texts receiving justified low scores rather 5780 than longer texts being voted higher regardless of content. From a qualitative point of view, a correlation effect between length and quality would also be expected, since a solid argumentative reasoning (claim and justification) usually requires at least some amount of text. The scatterplots in Table 3a additionally corroborate the correlation effects: only a very minor trend is apparent, with longer text receiving slightly higher scores. Given the accumulation of texts towards the lower end of the length spectrum, and these receiving lower scores further explains the lower overall correlation when only measuring in texts over 100 words. 5.3 Overall Argument Quality For some applications a scalar value for overall argument quality is warranted. As it has been argued that an overall argument quality is hard to measure, the three different explored quality dimensions could be combined to derive such a rating. The high correlation of the different quality dimensions implies such a latent variable. As a working hypothesis, the overall argument quality could be interpreted as a three-dimensional vector in a space spanned by the three quality dimensions. 
Based on this, two essential questions have to be explored: (1) Are the different dimensions equally influential on the overall argument quality? (2) How can a scalar quality value for overall quality be derived from such a vector? To address the first question, principal component analysis (PCA) was carried out to measure the influence of each quality dimension on the hypothesized latent variable. Results are given in Table 3d. The first step of the PCA accounts for 73% of the data variance, and is equally influenced by all three quality dimensions. Therefore, evidence is given towards the hypothesis. As for how to derive a numerical value for this overall argument quality, since the influence of all dimensions is equal, the euclidean vector length is proposed. However, since the quality scores derived in this work are positive as well as negative, the length of a vector is the same as that of its negative counterpart. To account for this, the score distributions are equally shifted into the positive domain. Thus, a standardized scalar value for overall argument quality can be calculated. 6 Conclusion A novel approach for annotating argument quality based on stochastic transitivity modeling has been proposed, outperforming existing approaches in terms of annotation effort and annotation detail, while maintaining a high annotation quality. The overall workload in comparison to previous approaches within the same class of approaches was reduced by 93.17% through an efficient sampling method. Sampling at even higher rates is possible, resulting in the new framework operating at the same cost as the traditional approach relying on graded scales. The collected data and a reference implementation of our model are made available in form of the Webis-ArgQuality-20 corpus, one of the largest and most detailed corpora for pairwise argument quality. The collected corpus can be used for a multitude of purposes—especially in the emerging field of argument retrieval, it is suitable as basis for retrieval evaluation, or to train new learning to rank models. A second field of application is debate systems, where a dataset can be of use for training a system to formulate new arguments. The developed annotation approach is also not only limited to rate argument quality: it can easily be transferred to other questions or criteria that can be rated by comparison. Even though the annotation cost can be slightly higher compared to the traditional absolute rating approach, the derived data is much more detailed and allows for conclusions with higher statistical power. Insight into argument quality was derived on a larger scale than in previous studies. It has been shown that the three quality dimensions can be successfully annotated by laymen when using the described annotation procedure. The correlation patterns found in previous studies were reproduced, showing the quality dimensions to be equally correlating with each other. This is likely due to them being dependent on a latent overall quality, a hypothesis that was supported using a PCA analysis of derived quality vectors. A procedure to derive a scalar value for overall quality was introduced, proposing Euclidean vector length to combine the different dimension scores. 5781 References Alan Agresti. 2003. Categorical data analysis, 2 edition. John Wiley & Sons, Hoboken, NY. Ralph Allan Bradley and Milton E. Terry. 1952. Rank analysis of incomplete block designs: The method of paired comparison. Biometrika, 39(3-4):324–345. Michael L Burton. 2003. 
Too many questions? The uses of incomplete cyclic designs for paired comparisons. Field Methods, 15(2):115–130. Xi Chen, Paul N. Bennett, Kevyn Collins-Thompson, and Eric Horvitz. 2013. Pairwise ranking aggregation in a crowdsourced setting. In Proceedings of the sixth ACM international conference on web search and data mining, WSDM ’13, pages 193–202, New York, NY. ACM. Ivan Habernal and Iryna Gurevych. 2016. Which argument is more convincing? Analyzing and predicting convincingness of web arguments using bidirectional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1589– 1599, Berlin, Germany. Association for Computational Linguistics. David R. Hunter. 2004. MM algorithms for generalized Bradley-Terry models. The annals of statistics, 32(1):384–406. Iadh Ounis, Gianni Amati, Vassilis Plachouras, Ben He, Craig Macdonald, and Christina Lioma. 2006. Terrier: A High Performance and Scalable Information Retrieval Platform. In Proceedings of ACM SIGIR’06 Workshop on Open Source Information Retrieval (OSIR 2006). Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web. Thomas Pfeiffer, Xi Alice Gao, Yiling Chen, Andrew Mao, and David G Rand. 2012. Adaptive polling for information aggregation. In Twenty-Sixth AAAI Conference on Artificial Intelligence. Leslie Gross Portney, Mary P Watkins, et al. 2009. Foundations of clinical research: Applications to practice, volume 892. Pearson/Prentice Hall, Upper Saddle River, NJ. Martin Potthast, Lukas Gienapp, Florian Euchner, Nick Heilenkötter, Nico Weidmann, Henning Wachsmuth, Benno Stein, and Matthias Hagen. 2019. Argument Search: Assessing Argument Relevance. In 42nd International ACM Conference on Research and Development in Information Retrieval (SIGIR 2019). ACM. P.V. Rao and Lawrence L. Kupper. 1967. Ties in pairedcomparison experiments: A generalization of the Bradley-Terry model. Journal of the American Statistical Association, 62(317):194–204. Edwin D. Simpson and Iryna Gurevych. 2018. Finding convincing arguments using scalable bayesian preference learning. Trans. Assoc. Comput. Linguistics, 6:357–371. Benno Stein and Henning Wachsmuth, editors. 2019. 6th Workshop on Argument Mining (ArgMining 2019) at ACL. Association for Computational Linguistics, Berlin Heidelberg New York. Reid Swanson, Brian Ecker, and Marilyn Walker. 2015. Argument mining: Extracting arguments from online dialogue. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 217–226, Prague, Czech Republic. Association for Computational Linguistics. Assaf Toledo, Shai Gretz, Edo Cohen-Karlik, Roni Friedman, Elad Venezian, Dan Lahav, Michal Jacovi, Ranit Aharonov, and Noam Slonim. 2019. Automatic argument quality assessment - New datasets and methods. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLPIJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 5624–5634. Association for Computational Linguistics. Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Alberdingk Thijm, Graeme Hirst, and Benno Stein. 2017. Computational argumentation quality assessment in natural language. In 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), pages 176–187. 
Ting Yan, Jinfeng Xu, and Yaning Yang. 2011. Grouped sparse paired comparisons in the Bradley-Terry model. arXiv preprint. Peng Ye and David Doermann. 2013. Combining preference and absolute judgements in a crowd-sourced setting. In ICML Workshop.
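To make the overall-quality derivation described above more concrete (checking with PCA that a single latent component dominates the three dimension scores, shifting the scores into the positive domain, and taking the Euclidean length), the following is a minimal sketch of our reading of that procedure. It assumes scikit-learn and NumPy are available; the function name and the choice of shifting each dimension by its minimum are our own, since the exact shift amount is not specified in the text.

```python
# Sketch (not the authors' code): scalar overall quality from three dimension scores.
import numpy as np
from sklearn.decomposition import PCA

def overall_quality(scores):
    # scores: [n_arguments, 3] matrix of the three quality-dimension scores
    pca = PCA(n_components=3).fit(scores)
    # the paper reports the first component explaining ~73% of the variance
    print("variance explained by PC1:", pca.explained_variance_ratio_[0])
    # shift each dimension into the positive domain (shift amount is our assumption)
    shifted = scores - scores.min(axis=0, keepdims=True)
    # Euclidean vector length as the scalar overall-quality value per argument
    return np.linalg.norm(shifted, axis=1)
```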
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5782–5788 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5782 Entity-Aware Dependency-Based Deep Graph Attention Network for Comparative Preference Classification Nianzu Ma†, Sahisnu Mazumder†, Hao Wang♮, Bing Liu† †Department of Computer Science, University of Illinois at Chicago, USA ♮School of Information Science and Technology, Southwest Jiaotong University, China [email protected], [email protected] [email protected], [email protected] Abstract This paper studies the task of comparative preference classification (CPC). Given two entities in a sentence, our goal is to classify whether the first (or the second) entity is preferred over the other or no comparison is expressed at all between the two entities. Existing works either do not learn entity-aware representations well and fail to deal with sentences involving multiple entity pairs or use sequential modeling approaches that are unable to capture long-range dependencies between the entities. Some also use traditional machine learning approaches that do not generalize well. This paper proposes a novel Entityaware Dependency-based Deep Graph Attention Network (ED-GAT) that employs a multihop graph attention over a dependency graph sentence representation to leverage both the semantic information from word embeddings and the syntactic information from the dependency graph to solve the problem. Empirical evaluation shows that the proposed model achieves the state-of-the-art performance in comparative preference classification. 1 Introduction Given a sentence that contains two entities of interest, the task of Comparative Preference Classification is to decide whether there is a comparison between the two entities and if so, which entity is preferred (Jindal and Liu, 2006a; Ganapathibhotla and Liu, 2008; Liu, 2012; Panchenko et al., 2019). For example, considering sentence s1 (shown in Table 1), there is a comparison between the two underlined entities, and “golf” is preferred over “baseball”. This sentence contains explicit comparative predicate “easier”. The task seems straightforward but is quite challenging due to many counterexamples. For example, s2 shows that “better” may not indicate a comparison. s3, another counterexample, shows that “slower” indeed indicates a ID Sentences s1 Golf is easier to pick up than baseball. s2 I’m considering learning Python and more PHP if any of those would be better. s3 The tools based on Perl and Python is much slower under Windows than K9. Table 1: Comparative sentence examples. Entities of interest are underlined in each sentence. comparison, but not between “Perl” and “Python”, but between “tools” and “K9”. Problem statement. Given a sentence s = ⟨w1, w2, ..., e1, ..., e2, ...wn⟩, where e1 and e2 are entities consisting of a single word or a phrase, and e1 appears before e2 in the sentence, our goal is to classify the comparative preference direction between these two entities into one of the three classes: {BETTER, WORSE, NONE}. BETTER (WORSE) means e1 is preferred (not preferred) over e2. NONE means that there is no comparative relation between e1 and e2. Although closely related, Comparative Preference Classification (CPC) is different from Comparative Sentence Identification (CSI), which is a 2-class classification problem that classifies a sentence as a comparative or a non-comparative sentence. 
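To make the problem statement concrete, here is a small illustrative data structure for a single CPC instance; the class and field names are ours and do not reflect the paper's dataset format.

```python
# Sketch of one CPC instance: a sentence, two entities of interest, and a 3-way label.
from dataclasses import dataclass
from typing import Literal

@dataclass
class CPCInstance:
    sentence: str
    e1: str                                   # first entity (appears before e2)
    e2: str                                   # second entity
    label: Literal["BETTER", "WORSE", "NONE"]

ex = CPCInstance("Golf is easier to pick up than baseball.", "Golf", "baseball", "BETTER")
```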
In previous work, Jindal and Liu (2006a) did CSI without considering which two entities are involved in a comparison. Tkachenko and Lauw (2015) employed some dependency graph features to approach the CSI task given two entities of interest. In this entity-aware case, syntactic features are crucial. However, not using word embeddings in the model makes the model harder to generalize with a good performance given various ways of expressing comparisons. Panchenko et al. (2019) gave the state-of-the-art result on the CPC task by using a pretrained sentence encoder to produce sentence embeddings as a feature for classification. However, this model is not entity-aware and does not use the dependency graph information. 5783 than Windows under slower much is Python and Perl on based tools The K9 . Figure 1: Dependency graph representation of a comparative sentence For the CPC task, building a model that is entityaware and also explicitly uses the dependency graph information is vital. We explain the reason as follows. For example, the dependency graph information gives a clue that the underlined entities in s2 of Table 1 are not involved in a comparison, although there is a comparative indicator “better” in the sentence. s3 (also refer to Figure 1) has two entity pairs, which make an entity-aware model necessary. The pair of entities, tools and K9, are far away from each other in the sequence. But in the dependency graph, they are just two hops away from each other and one hop away from the key comparative predicate “slower”. For the pair of entities, Perl and Python, although both are sequentially near to the word “slower”, the dependency graph information does not indicate they are involved in a comparison. We see that an entity-aware model can avoid the mistake of taking comparative predicates not associated with the entity pair as an evidence. Also, the dependency graph of a sentence contains important clues that can benefit the comparative preference classification. Methods, which are not entity-aware and do not model dependency structures, are not capable of dealing with the cases in s2 and s3. To address the limitations of the previous models, we propose a novel Entity-aware Dependencybased Deep Graph Attention Network (ED-GAT) for comparative preference classification. We represent a sentence by its dependency graph. This Graph Attention Network (GAT) (Veliˇckovi´c et al., 2018) based model can naturally fuse word semantic information and dependency information within the model. By building a deep graph attention network stacking several self-attention layers, the model can effectively capture long-range dependencies, which is beneficial for identifying the comparison preference direction between two entities. We have applied this model on a real-world benchmark dataset, and the results show that incorporating the dependency graph information greatly helps this task. It outperforms strong and latest baselines, as discussed in the experiments. 2 Proposed Model In this section, we first give a brief introduction to the GAT model. We then present the proposed ED-GAT model and discuss how to apply it to the CPC task. 2.1 Graph Attention Network (GAT) The critical component of our model is the Graph Attention Network (GAT) (Veliˇckovi´c et al., 2018), which fuses the graph-structured information and node features within the model. Its masked selfattention layers allow a node to attend to neighborhood features and learn different attention weights for different neighboring nodes. 
The node features fed into a GAT layer are X = [x_1, x_2, ..., x_i, ..., x_n], x_i ∈ R^F, where n is the number of nodes and F is the feature size of each node. The attention mechanism of a typical GAT can be summarized by equation (1):

h_i^{\mathrm{out}} = \big\Vert_{k=1}^{K} \, \sigma\Big( \sum_{j \in \mathcal{N}_i} \alpha_{ij}^{k} W^{k} x_j \Big), \qquad
\alpha_{ij}^{k} = \frac{\exp\!\big(f\big((a^{k})^{\top} [W^{k} x_i \,\Vert\, W^{k} x_j]\big)\big)}{\sum_{c \in \mathcal{N}_i} \exp\!\big(f\big((a^{k})^{\top} [W^{k} x_i \,\Vert\, W^{k} x_c]\big)\big)} \qquad (1)

Here, given the node feature vectors in GAT, node i attends over its 1-hop neighbors j ∈ N_i. The operator ∥ over k = 1..K denotes the concatenation of K multi-head attention outputs, h_i^out ∈ R^{F'} is the output of node i at the current layer, α_ij^k is the k-th attention weight between nodes i and j, W^k ∈ R^{(F'/K)×F} is a linear transformation, a^k ∈ R^{2F'/K} is the weight vector, and f(·) is the LeakyReLU non-linearity. Overall, the input-output behavior of a single GAT layer is summarized as H^out = GAT(X, A; Θ_l). The input is X ∈ R^{n×F} and the output is H^out ∈ R^{n×F'}, where n is the number of nodes, F is the node feature size, F' is the GAT hidden size, and A ∈ R^{n×n} is the adjacency matrix of the graph.

2.2 ED-GAT for CPC task
We use the dependency parser in (Chen and Manning, 2014) to convert a sentence into a dependency parse graph. Each word corresponds to a node in the graph. The node features are the word embedding vectors, denoted as x_i ∈ R^F for node i, and the input node feature matrix is X ∈ R^{n×F}. Note that an entity is either a single word or a multi-word phrase. To treat each entity as one node, we replace the whole entity word/phrase with "EntityA" or "EntityB" before parsing. A multi-word entity embedding is obtained by averaging the embeddings of the words in the entity.

Figure 2: L-layer ED-GAT model.

We observe that for a given node in the dependency parse graph, both its parents and children contain useful information for the task. To make the ED-GAT model treat both its parents and children as neighbors, we simplify the original directed dependency graph into an undirected graph. The structure of the graph is encoded into an adjacency matrix A ∈ R^{n×n}. ED-GAT does not attend to all neighbors of a given node on an equal basis. The attention weights to the neighbors are automatically learned during training based on their usefulness to the task, regardless of whether they are parents or children in the dependency graph. The higher the attention weight given to a neighbor, the more useful this neighbor is to the task. In a single GAT layer, a word or an entity in a graph only attends over the local information from 1-hop neighbors. To enable the model to capture long-range dependencies, we stack L layers to make a deep model, which allows information from L hops away to propagate to this word. Our model is thus a deep graph attention network. As illustrated in Figure 2, the stacking architecture is represented as H^{l+1} = GAT(H^l, A; Θ_l), l ≥ 0, with H^0 = X W_0 + b_0. The output of GAT layer l, H^l_out = GAT(H^l, A; Θ_l), is the input for layer (l + 1), denoted by H^{l+1}. H^0 is the initial input; W_0 ∈ R^{F×F'} and b_0 are the projection matrix and bias vector. For an L-layer ED-GAT model, the output of the final layer is H^L_out ∈ R^{n×F'}. We use a mask layer to fetch the two hidden vectors from H^L_out that correspond to the two entities of interest: (h_e1, h_e2) = Masklayer(H^L_out). Next, we concatenate these two vectors as v = [h_e1 ∥ h_e2] and use a feed-forward layer with a softmax function to project v into classes for prediction.
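The following is a minimal sketch of this architecture using PyTorch Geometric's GATConv. The wiring (initial projection, stacked GAT layers, entity masking, and the classification head) follows the description above, but the variable names, the ELU activation between layers, and the single-graph (unbatched) forward pass are our simplifications, not the authors' implementation.

```python
# Sketch of an ED-GAT-style model; assumes torch and torch_geometric are installed.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GATConv

class EDGAT(nn.Module):
    def __init__(self, emb_dim, hidden, heads=6, num_layers=8, num_classes=3):
        super().__init__()
        # H0 = X W0 + b0 (initial projection of word embeddings)
        self.proj = nn.Linear(emb_dim, hidden)
        # stacked GAT layers; assumes hidden is divisible by heads (e.g., 300 / 6)
        self.layers = nn.ModuleList(
            [GATConv(hidden, hidden // heads, heads=heads) for _ in range(num_layers)]
        )
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, x, edge_index, e1_idx, e2_idx):
        # x: [n, emb_dim] node (word) embeddings; edge_index: undirected dependency edges
        h = self.proj(x)
        for gat in self.layers:
            h = F.elu(gat(h, edge_index))            # multi-head outputs are concatenated
        # "mask layer": pick the hidden vectors of the two entity nodes and concatenate
        v = torch.cat([h[e1_idx], h[e2_idx]], dim=-1)
        return F.log_softmax(self.classifier(v), dim=-1)
```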
Here using he1 and he2 makes the ED-GAT model entity-aware as they are the output of the nodes corresponding to entities e1 and e2, each of which attends over its neighbors’ features in L hops in the graph and leverages both the word semantics and dependency structure information in learning. The ED-GAT model is trained by minimizing the standard cross-entropy loss over training examples. 3 Related Works Many papers have been devoted to exploring comparisons in text. For the CSI task, early works include those in (Jindal and Liu, 2006a; Ganapathibhotla and Liu, 2008). More recently, Park and Blake (2012) employed handcrafted syntactic rules to identify comparative sentences in scientific articles. For other languages such as Korean and Chinese, related works include (Huang et al., 2008), (Yang and Ko, 2009) and (Zhang and Jin, 2012). Other works are interested in identifying entities, aspects and comparative predicates in comparative sentences, e.g., (Jindal and Liu, 2006b), (Hou and Li, 2008), (Kessler and Kuhn, 2014), (Kessler and Kuhn, 2013), and (Feldman et al., 2007). Ganapathibhotla and Liu (2008) used lexicon properties to determine the preferred entities given the output of (Jindal and Liu, 2006b), which is quite different from our task. There are also works related to product ranking using comparisons, such as those in (Kurashima et al., 2008), (Zhang et al., 2013), (Tkachenko and Lauw, 2014) and (Li et al., 2011). All these related works solve very different problems in comparison analysis than our CPC task. Works in NLP that use Graph Neural Networks and dependency graph structures include (Huang and Carley, 2019), (Guo et al., 2019). But their tasks and models are different from ours. 4 Experiments 4.1 Dataset We perform experiments using the benchmark CompSent-19 dataset (Panchenko et al., 2019), where each sentence has an entity pair (e1, e2) and its comparative preference label. The original dataset is split into an 80% training set and a 20% test set. During the experiment, we further 5785 Dataset Better Worse None Total Train 872(19%) 379(8%) 3,355(73%) 4,606 Dev 219(19%) 95(8%) 839(73%) 1,153 Test 273(19%) 119(8%) 1,048(73%) 1,440 Total 1,346 593 5,242 7,199 Table 2: Statistics of the CompSent-19 dataset split the original training data by randomly sampling 20% for each label as the development set for model selection. The dataset statistics are given in Table 2. The model is trained only on the newly split training set. We use the class-based F1 score as the evaluation measure. F1(B), F1(W) and F1(N) represent F1 score for classes BETTER, WORSE and NONE respectively. F1-Micro is the average F1 score as in (Panchenko et al., 2019). 4.2 Model Implementation Details The Stanford Neural Network Dependency Parser (Chen and Manning, 2014) is used to build the dependency parse graph for each sentence. In our experiment, we use two pretrained word embeddings: GloVe embeddings (Pennington et al., 2014)1 and BERT embedding (Devlin et al., 2019)2. The input of BERT is formatted as the standard BERT input format, with “[CLS]” before and “[SEP]” after the sentence tokens. For this, we employ the BERT tokenizer to tokenize each word into word pieces (tokens). The output of the pretrainedBERT model is a sequence of embeddings, each of size 768, and corresponds to a word piece. We average the word piece embeddings of the original word to get the embedding for each word (node in the dependency graph). 
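As an illustration of this word-piece averaging step, here is a hedged sketch using the HuggingFace transformers fast tokenizer's word_ids() mapping to pool piece vectors back to one frozen BERT vector per word. The model name and helper function are ours, and the paper may have implemented this step differently.

```python
# Sketch: one static BERT vector per word via word-piece averaging.
import torch
from transformers import BertTokenizerFast, BertModel

tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

def word_embeddings(words):
    # words: list of tokens for one sentence (entities already merged into one token)
    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        pieces = bert(**enc).last_hidden_state[0]     # [num_pieces, 768], incl. [CLS]/[SEP]
    out = []
    for w in range(len(words)):
        idx = [i for i, wid in enumerate(enc.word_ids()) if wid == w]
        out.append(pieces[idx].mean(dim=0))           # average the pieces of word w
    return torch.stack(out)                           # [num_words, 768]
```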
Note that, word embeddings are kept frozen and not fine-tuned by the subsequent model structure. For the ED-GAT model, we set the hidden size as 300. The features of the nodes, which are the word embeddings, are first transformed into vectors of the hidden size and then fed into the ED-GAT model. We use 6 attention heads, training batch size of 32, Adam optimizer (Kingma and Ba, 2014) with learning rate 5e-4, word embedding dropout rate (Srivastava et al., 2014) 0.3 and GAT attention dropout rate 0. The implementation of the model is based on PyTorch Geometric (PyG) (Fey and Lenssen, 2019) and NVIDIA GPU GTX 1080 ti. 1http://nlp.stanford.edu/data/glove.840B.300d.zip 2For all our BERT related experiments, we use the pretrained BERT model: https://storage.googleapis. com/bert_models/2018_10_18/uncased_L-12_ H-768_A-12.zip 4.3 Compared Models We compare models from the previous literature with several variations of our proposed model. Majority-Class assigns the majority label in the training set to each instance in the test set. SentEmbed given in (Panchenko et al., 2019) obtains sentence embeddings from a pretrained Sentence Encoder (Conneau et al., 2017; Bowman et al., 2015). The sentence embedding3 is then fed to XGBoost (Chen and Guestrin, 2016) for classification. For a fair comparison, we also feed the sentence embedding into a linear layer. They are represented as SentEmbedXGBoost and SentEmbedLinear. SVM-Tree4 given in (Tkachenko and Lauw, 2015) uses convolution kernel methods and dependency tree features to approach the CSI task. We use the one-vs-rest technique to adapt this model to our three-class CPC task. WordEmbed-Avg first constructs a sentence embedding by averaging the word embeddings of all words in a sentence, and then feeds it to a linear classifier. Glove-Avg and BERT-Avg, respectively are the methods that use GloVe embeddings from GloVe.840B (Pennington et al., 2014) and static BERT embeddings (Devlin et al., 2019). BERT-FT appends a linear classification layer on the hidden state corresponding to the first token “[CLS]” of the BERT sequence output and then finetunes the pretrained BERT weights on our task. ED-GAT is the proposed model in this paper (Section 2.2). We use both GloVe embeddings and BERT embeddings. We use (L) to represent model variants with different numbers of layers and use the subscript to denote the type of embedding. For example, ED-GATGloVe(8) is the ED-GAT model using GloVe embedding, and the depth of the model is 8 layers. We also add the LSTMBERT baseline, which uses the sequence output of a static BERT model to train an LSTM model. The final hidden vector is used for classification. 4.4 Results and Analysis As we see in Table 3, the state-of-the-art (SOTA) baseline is SentEmbedXGBoost. SentEmbedLinear performs much worse than SentEmbedXGBoost. This result shows that XGBoost classifies sentence embeddings much better than a linear layer. Simply using word embedding average, GloVe-Avg 3https://github.com/facebookresearch/InferSent 4https://github.com/sitfoxfly/tree-svm 5786 Models Micro. 
F1(B) F1(W) F1(N) Baselines Majority-Class 68.95 0.0 0.0 81.62 SVM-Tree 68.12 53.35 13.90 78.13 SentEmbedLinear 79.31 62.71 37.61 88.42 SentEmbedXGBoost 85.00* 75.00* 43.00* 92.00* Glove-Avg 76.32 48.28 20.12 86.34 BERT-Avg 77.64 53.94 26.88 87.47 LSTMBERT 80.97 63.55 44.02 88.95 BERT-FT 83.12 69.62 50.37 89.84 Proposed Models ED-GATGloVe(8) 83.96 72.58 47.35 90.79 ED-GATGloVe(9) 83.89 72.05 46.45 90.54 ED-GATGloVe(10) 84.24 72.56 50.20 91.19 ED-GATBERT(8) 87.43 78.21 56.14 92.98 ED-GATBERT(9) 86.46 74.40 58.72 92.31 ED-GATBERT(10) 86.18 77.35 53.33 92.23 Table 3: Comparison of baselines and ED-GAT variants. * indicates the result is from the original paper. and BERT-Avg do not perform well. The result of LSTMBERT shows that using BERT embedding sequentially is not suitable for our task. BERT-FT fine-tunes BERT on our task, but its performance is below SOTA. During experiments, we also found that the performance of BERT-FT is unstable. The training process of the model quickly overfits the pretrained BERT weights. For the ED-GAT model, we first tried to train embeddings only on this dataset by randomly initializing word embeddings as input. As expected, the results were significantly poorer than those using the pre-trained embeddings, in part because our training data is very small (see Table 2). As the baselines all use pretrained embeddings, we thus report the results of using pre-trained word embeddings in Table 3. When employing Glove embeddings, surprisingly, ED-GATGloVe(10) performs better than BERT-FT, which is based on a language model pretrained on a huge corpus. We also tried to employ word2vec5 for ED-GAT. It got very similar results to those using the GloVe embeddings. The Micro-F1 scores of using word2vec embeddings for the number of layers 8, 9, and 10 are 83.12, 83.33, and 84.86, respectively. To be concise, we did not include these results in Table 3. Our model also uses the static BERT embedding, which further improves the result. Using static BERT embedding avoids overfitting. On the one hand, it incorporates the rich semantic information with the BERT pretrained weights. On the other hand, ED-GAT’s ability to leverage dependency graph features greatly helps the model in capturing 5GoogleNews-vectors-negative300.bin.gz (https:// code.google.com/archive/p/word2vec/) Figure 3: Effects of the number of layers in ED-GAT the comparison between the entities and classifying the preference direction. Our ED-GATBERT(8) reports the new state-of-the-art results for CPC task considering F1-Micro and all class-wise F1. Effects of Model Depth. From Figure 3, we see that increasing the number of stacked layers improves the performance of the model. For EDGATGloVe, as GloVe does not contain the context information, the GAT structure based on the dependency graph greatly improves the result. Even the 2-layer model achieves a good result. ED-GATBERT does not have the same effect because the BERT embedding already contains rich semantic information. But still, when the number of layers increases, ED-GATBERT becomes more powerful as it captures longer range dependencies. 5 Conclusion This paper proposes a novel model called ED-GAT for Comparative Preference Classification. It naturally leverages dependency graph features and word embeddings to capture the comparison and to classify the preference direction between two given entities. Experimental results show that it outperforms all strong baselines and even BERT pretrained using a huge corpus. 
Our future work aims to improve the CPC performance further. Apart from that, we also plan to design novel models to perform the related tasks of entity extraction and aspect extraction from comparative sentences. Performing all these tasks jointly in a multitask learning framework is a promising direction as well because it can exploit the shared features and the inherent relationships of these tasks to perform all tasks better. Acknowledgments This work was supported in part by two grants from National Science Foundation: IIS-1910424 and IIS-1838770, and a research gift from Northrop Grumman. 5787 References Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP, pages 632–642. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP, pages 740–750. Tianqi Chen and Carlos Guestrin. 2016. Xgboost: A scalable tree boosting system. In SIGKDD, pages 785–794. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In EMNLP, pages 670–680. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171–4186. Ronen Feldman, Moshe Fresco, Jacob Goldenberg, Oded Netzer, and Lyle Ungar. 2007. Extracting product comparisons from discussion boards. In ICDM, pages 469–474. Matthias Fey and Jan E. Lenssen. 2019. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds. Murthy Ganapathibhotla and Bing Liu. 2008. Mining opinions in comparative sentences. In Proceedings of the 22nd International Conference on Computational Linguistics, pages 241–248. Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In ACL, pages 241–251. Feng Hou and Guo-hui Li. 2008. Mining chinese comparative sentences by semantic role labeling. In Proceedings of International Conference on Machine Learning and Cybernetics, pages 2563–2568. Binxuan Huang and Kathleen Carley. 2019. Syntaxaware aspect level sentiment classification with graph attention networks. In EMNLP-IJCNLP, pages 5468–5476. Xiaojiang Huang, Xiaojun Wan, Jianwu Yang, and Jianguo Xiao. 2008. Learning to identify comparative sentences in chinese text. In Proceedings of Pacific Rim International Conference on Artificial Intelligence, pages 187–198. Nitin Jindal and Bing Liu. 2006a. Identifying comparative sentences in text documents. In SIGIR, pages 244–251. Nitin Jindal and Bing Liu. 2006b. Mining comparative sentences and relations. In AAAI, pages 1331–1336. Wiltrud Kessler and Jonas Kuhn. 2013. Detection of product comparisons-how far does an out-of-the-box semantic role labeling system take you? In EMNLP, pages 1892–1897. Wiltrud Kessler and Jonas Kuhn. 2014. Detecting comparative sentiment expressions - a case study in annotation design decisions. In Proceedings of the 12th edition of the KONVENS conference, pages 165– 170. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Takeshi Kurashima, Katsuji Bessho, Hiroyuki Toda, Toshio Uchiyama, and Ryoji Kataoka. 2008. Ranking entities using comparative relations. 
In Proceedings of International Conference on Database and Expert Systems Applications, pages 124–133. Si Li, Z-J Zha, Zhaoyan Ming, Meng Wang, T-S Chua, Jun Guo, and Weiran Xu. 2011. Product comparison using comparative relations. In SIGIR, pages 1151– 1152. Bing Liu. 2012. Sentiment analysis and opinion mining. Morgan & Claypool Publishers. Alexander Panchenko, Alexander Bondarenko, Mirco Franzek, Matthias Hagen, and C Biemann. 2019. Categorizing comparative sentences. In Workshop on Argument Mining@ACL, pages 136–145. Dae Hoon Park and Catherine Blake. 2012. Identifying comparative claim sentences in full-text scientific articles. In Workshop on detecting structure in scholarly discourse, pages 1–9. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958. Maksim Tkachenko and Hady Lauw. 2015. A convolution kernel approach to identifying comparisons in text. In ACL-IJCNLP, pages 376–386. Maksim Tkachenko and Hady W Lauw. 2014. Generative modeling of entity comparisons in text. In CIKM, pages 859–868. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph Attention Networks. In ICLR. Seon Yang and Youngjoong Ko. 2009. Extracting comparative sentences from korean text documents using comparative lexical patterns and machine learning techniques. In ACL-IJCNLP, pages 153–156. 5788 Runxiang Zhang and Yaohong Jin. 2012. Identification and transformation of comparative sentences in patent chinese-english machine translation. In Proceedings of International Conference on Asian Language Processing, pages 217–220. Zhu Zhang, Chenhui Guo, and Paulo Goes. 2013. Product comparison networks for competitive analysis of online word-of-mouth. ACM Transactions on Management Information Systems (TMIS), pages 20:1– 20:22.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5789–5798 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5789 OPINIONDIGEST: A Simple Framework for Opinion Summarization Yoshihiko Suhara∗1 Xiaolan Wang∗1 Stefanos Angelidis2 Wang-Chiew Tan1 1Megagon Labs 2University of Edinburgh {yoshi,xiaolan,wangchiew}@megagon.ai [email protected] Abstract We present OPINIONDIGEST, an abstractive opinion summarization framework, which does not rely on gold-standard summaries for training. The framework uses an Aspect-based Sentiment Analysis model to extract opinion phrases from reviews, and trains a Transformer model to reconstruct the original reviews from these extractions. At summarization time, we merge extractions from multiple reviews and select the most popular ones. The selected opinions are used as input to the trained Transformer model, which verbalizes them into an opinion summary. OPINIONDIGEST can also generate customized summaries, tailored to specific user needs, by filtering the selected opinions according to their aspect and/or sentiment. Automatic evaluation on YELP data shows that our framework outperforms competitive baselines. Human studies on two corpora verify that OPINIONDIGEST produces informative summaries and shows promising customization capabilities1. 1 Introduction The summarization of opinions in customer reviews has received significant attention in the Data Mining and Natural Language Processing communities. Early efforts (Hu and Liu, 2004a) focused on producing structured summaries which numerically aggregate the customers’ satisfaction about an item across multiple aspects, and often included representative review sentences as evidence. Considerable research has recently shifted towards textual opinion summaries, fueled by the increasing success of neural summarization methods (Cheng and Lapata, 2016; Paulus et al., 2018; See et al., 2017; Liu and Lapata, 2019; Isonuma et al., 2019). ∗Equal contribution. 1Our code is available at https://github.com/ megagonlabs/opiniondigest. Opinion summaries can be extractive, i.e., created by selecting a subset of salient sentences from the input reviews, or abstractive, where summaries are generated from scratch. Extractive approaches produce well-formed text, but selecting the sentences which approximate the most popular opinions in the input is challenging. Angelidis and Lapata (2018) used sentiment and aspect predictions as a proxy for identifying opinion-rich segments. Abstractive methods (Chu and Liu, 2019; Braˇzinskas et al., 2019), like the one presented in this paper, attempt to model the prevalent opinions in the input and generate text that articulates them. Opinion summarization can rarely rely on goldstandard summaries for training (see Amplayo and Lapata (2019) for a supervised approach). Recent work has utilized end-to-end unsupervised architectures, based on auto-encoders (Chu and Liu, 2019; Braˇzinskas et al., 2019), where an aggregated representation of the input reviews is fed to a decoder, trained via reconstruction loss to produce reviewlike summaries. Similarly to their work, we assume that review-like generation is appropriate for opinion summarization. However, we explicitly deal with opinion popularity, which we believe is crucial for multi-review opinion summarization. Additionally, our work is novel in its ability to explicitly control the sentiment and aspects of selected opinions. 
The aggregation of input reviews is no longer treated as a black box, thus allowing for controllable summarization. Specifically, we take a step towards more interpretable and controllable opinion aggregation, as we replace the end-to-end architectures of previous work with a pipeline framework. Our method has three components: a) a pre-trained opinion extractor, which identifies opinion phrases in reviews; b) a simple and controllable opinion selector, which merges, ranks, and –optionally– filters the extracted opinions; and c) a generator model, which is trained 5790 Good location close to the wharf, aquatic park and the many other attraction. Loud fridge and AC.  good location close to wharf close to aquatic park close to attraction loud fridge loud AC 1. Opinion Extraction Good location close to the wharf, aquatic park and the many other attraction. Loud fridge and AC.  2. Opinion Selection good location close to aquatic park loud fridge great location perfect location near the aquarium loud appliances good location near the aquarium loud appliances Multiple original reviews 3. Summary Generation Transformer Original review: Extracted opinion phrases: Opinion clusters: Selected opinions: Good location, Close to Fisherman's Wharf and other attractions. Friendly and nice staff. Noisy room with paper thin walls. Reconstructed review: good location near the aquarium loud appliances Selected opinions: Good location, near the aquarium. The appliances were quite loud. Generated summary:   good location close to wharf close to aquatic park close to attraction loud fridge loud AC Extracted opinion phrases: a. Training via Reconstruction b. Summarization Figure 1: Overview of the OPINIONDIGEST framework. to reconstruct reviews from their extracted opinion phrases and can then generate opinion summaries based on the selected opinions. We describe our framework in Section 2 and present two types of experiments in Section 3: A quantitative comparison against established summarization techniques on the YELP summarization corpus (Chu and Liu, 2019); and two user studies, validating the automatic results and our method’s ability for controllable summarization. 2 OPINIONDIGEST Framework Let D denote a dataset of customer reviews on individual entities {e1, e2, . . . , e|D|} from a single domain, e.g., restaurants or hotels. For every entity e, we define a review set Re = {ri}|Re| i=1 , where each review is a sequence of words r = (w1, . . . , wn). Within a review, we define a single opinion phrase, o = (wo1, . . . wom), as a subsequence of tokens that expresses the attitude of the reviewer towards a specific aspect of the entity2. Formally, we define the opinion set of r as Or = {(oi, poli, ai)}|Or| i=1 , where poli is the sentiment polarity of the i-th phrase (positive, neutral, or negative) and ai is the aspect category it discusses (e.g., a hotel’s service, or cleanliness). For each entity e, our task is to abstractively generate a summary se of the most salient opinions expressed in reviews Re. Contrary to previous abstractive methods (Chu and Liu, 2019; Braˇzinskas et al., 2019), which never explicitly deal with opinion phrases, we put the opinion sets of reviews at the core of our framework, as described in the following sections and illustrated in Figure 1. 2Words that form an opinion may not be contiguous in the review. Additionally, a word can be part of multiple opinions. 
2.1 Opinion Extraction Extracting opinion phrases from reviews has been studied for years under the Aspect-based Sentiment Analysis (ABSA) task (Hu and Liu, 2004b; Luo et al., 2019; Dai and Song, 2019; Li et al., 2019). We follow existing approaches to obtain an opinion set Or for every review in our corpus3. Specifically, we used a pre-trained tagging model (Miao et al., 2020) to extract opinion phrases, their polarity, and aspect categories. Step 1 (top-left) of Figure 1 shows a set of opinions extracted from a full review. 2.2 Opinion Selection Given the set or reviews Re = {r1, r2, . . . } for an entity e, we define the entity’s opinion set as Oe = {Or1∪Or2∪. . . }. Summarizing the opinions about entity e relies on selecting the most salient opinions Se ⊂Oe. As a departure from previous work, we explicitly select the opinion phrases that will form the basis for summarization, in the following steps. Opinion Merging: To avoid selecting redundant opinions in Se, we apply a greedy algorithm to merge similar opinions into clusters C = {C1, C2, ...}: given an opinion set Oe, we start with an empty C, and iterate through every opinion in Oe. For each opinion, (oi, poli, ai), we further iterate through every existing cluster in random order. The opinion is added to the first cluster C which satisfies the following criterion, or to a newly created cluster otherwise: ∀(oj, polj, aj) ∈C, cos(vi, vj) ≥θ, 3Our framework is flexible with respect to the choice of opinion extraction models. 5791 where vi and vj are the average word embedding of opinion phrase oi and oj respectively, cos(·, ·) is the cosine similarity, and θ ∈(0, 1] is a hyperparameter. For each opinion cluster {C1, C2, . . . }, we define its representative opinion Repr(Ci), which is the opinion phrase closest to its centroid. Opinion Ranking: We assume that larger clusters contain opinions which are popular among reviews and, therefore, should have higher priority to be included in Se. We use the representative opinions of the top-k largest clusters, as selected opinions Se. The Opinion Merging and Ranking steps are demonstrated in Step 2 (bottom-left) of Figure 1, where the top-3 opinion clusters are shown and their representative opinions are selected. Opinion Filtering (optional): We can further control the selection by filtering opinions based on their predicted aspect category or sentiment polarity. For example, we may only allow opinions where ai = “cleanliness”. 2.3 Summary Generation Our goal is to generate a natural language summary which articulates Se, the set of selected opinions. To achieve this, we need a natural language generation (NLG) model which takes a set of opinion phrases as input and produces a fluent, review-like summary as output. Because we cannot rely on gold-standard summaries for training, we train an NLG model that encodes the extracted opinion phrases of a single review and then attempts to reconstruct the review’s full text. Then, the trained model can be used to generate summaries. Training via Review Reconstruction: Having extracted Or for every review r in a corpus, we construct training examples {T(Or), r}, where T(Or) is a textualization of the review’s opinion set, where all opinion phrases are concatenated in their original order, using a special token [SEP]. 
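A minimal sketch of the opinion merging, representative-opinion selection, and textualization steps described above is given below; embed() is a placeholder for the averaged-word-embedding lookup, and the iteration order and tie-breaking are simplified relative to the paper's exact procedure.

```python
# Sketch of greedy opinion merging + textualization (not the authors' code).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def merge_opinions(opinions, embed, theta=0.8):
    clusters = []                                    # each cluster: list of (phrase, pol, aspect)
    for (o, pol, asp) in opinions:
        v = embed(o)
        placed = False
        for C in clusters:
            # add to the first cluster whose members are all similar enough
            if all(cosine(v, embed(o2)) >= theta for (o2, _, _) in C):
                C.append((o, pol, asp)); placed = True; break
        if not placed:
            clusters.append([(o, pol, asp)])
    return sorted(clusters, key=len, reverse=True)   # larger cluster = more popular opinion

def representative(C, embed):
    centroid = np.mean([embed(o) for (o, _, _) in C], axis=0)
    return max(C, key=lambda t: cosine(embed(t[0]), centroid))[0]

def textualize(selected_phrases):
    return " [SEP] ".join(selected_phrases)          # input string for the generator model
```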
For example: Or = {very comfy bed, clean bath} T(Or) = “very comfy bed [SEP] clean bath” The {T(Or), r} pairs are used to train a Transformer model (Vaswani et al., 2017)4 to reconstruct review text from extracted opinions, as shown in Step 3a (top-right) of Figure 1. 4Our framework is flexible w.r.t. the choice of the model. Using a pre-trained language model is part of future work. Method R1 R2 RL Best Review 27.97 3.46 15.29 Worst Review 16.91 1.66 11.11 LexRank 24.62 3.03 14.43 MeanSum 27.86 3.95 16.56 OPINIONDIGEST 29.30 5.77 18.56 Table 1: Summarization results on YELP with ROUGE. Summarization: At summarization time, we use the textualization of the selected opinions, T(Se), as input to the trained Transformer, which generates a natural language summary se as output (Figure 1, Step 3b). We order the selected opinions by frequency (i.e., their respective cluster’s size), but any desired ordering may be used. 3 Evaluation 3.1 Datasets We used two review datasets for evaluation. The public YELP corpus of restaurant reviews, previously used by Chu and Liu (2019). We used a different snapshot of the data, filtered to the same specifications as the original paper, resulting in 624K training reviews. We used the same goldstandard summaries for 200 restaurants as used in Chu and Liu (2019). We also used HOTEL, a private hotel review dataset that consists of 688K reviews for 284 hotels collected from multiple hotel booking websites. There are no gold-standard summaries for this dataset, so systems were evaluated by humans. 3.2 Baselines LexRank (Erkan and Radev, 2004): A popular unsupervised extractive summarization method. It selects sentences based on centrality scores calculated on a graph-based sentence similarity. MeanSum (Chu and Liu, 2019): An unsupervised multi-document abstractive summarizer that minimizes a combination of reconstruction and vector similarity losses. We only applied MeanSum to YELP, due to its requirement for a pre-trained language model, which was not available for HOTEL. Best Review / Worst Review (Chu and Liu, 2019): A single review that has the highest/lowest average word overlap with the input reviews. 3.3 Experimental Settings For opinion extraction, the ABSA models are trained with 1.3K labeled review sentences for YELP and 2.4K for HOTEL. For opinion merging, we used pre-trained word embeddings 5792 Method I-score C-score R-score LexRank -35.4 -32.1 -13.5 MeanSum 14.2 4.9 9.0 OPINIONDIGEST 21.2 27.2 4.4 (a) YELP Method I-score C-score R-score LexRank -5.8 -3.2 -0.5 Best Review -4.0 -10.7 17.0 OPINIONDIGEST 9.8 13.8 -16.5 (b) HOTEL Table 2: Best-Worst Scaling human evaluation. Fully (↑) Partially (↑) No (↓) MeanSum 23.25 % 42.57 % 34.18 % OPINIONDIGEST 29.77 % 47.91 % 22.32 % Table 3: Human evaluation results on content support. (glove.6B.300d), θ = 0.8, and selected the top-k (k = 15) most popular opinion clusters. We trained a Transformer with the original architecture (Vaswani et al., 2017). We used SGD with an initial learning rate of 0.1, a momentum of β = 0.1, and a decay of γ = 0.1 for 5 epochs with a batch size of 8. For decoding, we used Beam Search with a beam size of 5, a length penalty of 0.6, 3-gram blocking (Paulus et al., 2018), and a maximum generation length of 60. We tuned hyperparameters on the dev set, and our system appears robust to their setting (see Appendix A). 
We performed automatic evaluation on the YELP dataset with ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) (Lin, 2004) scores based on the 200 reference summaries (Chu and Liu, 2019). We also conducted user studies on both YELP and HOTEL datasets to further understand the performance of different models. 3.4 Results Automatic Evaluation: Table 1 shows the automatic evaluation scores for our model and the baselines on YELP dataset. As shown, our framework outperforms all baseline approaches. Although OPINIONDIGEST is not a fully unsupervised framework, labeled data is only required by the opinion extractor and is easier to acquire than gold-standard summaries: on YELP dataset, the opinion extraction models are trained on a publicly available ABSA dataset (Wang et al., 2017). Human Evaluation: We conducted three user studies to evaluate the quality of the generated summaries (more details in Appendix B). First, we generated summaries from 3 systems (ours, LexRank and MeanSum/Best Review) for every entity in YELP’s summarization test set and 200 Does the summary discuss the specified aspect: Exclusively Partially Not HOTEL 46.63 % 43.09 % 10.28 % Table 4: User study on aspect-specific summaries. random entities in the HOTEL dataset, and asked judges to indicate the best and worst summary according to three criteria: informativeness (I), coherence (C), and non-redundancy (R). The systems’ scores were computed using Best-Worst Scaling (Louviere et al., 2015), with values ranging from -100 (unanimously worst) to +100 (unanimously best.) We aggregated users’ responses and present the results in Table 2(a). As shown, summaries generated by OPINIONDIGEST achieve the best informativeness and coherence scores compared to the baselines. However, OPINIONDIGEST may still generate redundant phrases in the summary. Second, we performed a summary content support study. Judges were given 8 input reviews from YELP, and a corresponding summary produced either by MeanSum or by our system. For each summary sentence, they were asked to evaluate the extent to which its content was supported by the input reviews. Table 3 shows the proportion of summary sentences that were fully, partially, or not supported for each system. OPINIONDIGEST produced significantly more sentences with full or partial support, and fewer sentences without any support. Finally, we evaluated our framework’s ability to generate controllable output. We produced aspectspecific summaries using our HOTEL dataset, and asked participants to judge if the summaries discussed the specified aspect exclusively, partially, or not at all. Table 4 shows that in 46.6% of the summaries exclusively summarized a specified aspect, while only 10.3% of the summaries failed to contain the aspect completely. Example Output: Example summaries in Table 5 further demonstrate that a) OPINIONDIGEST is able to generate abstractive summaries from more than a hundred of reviews and b) produce controllable summaries by enabling opinion filtering. The first two examples in Table 5 show summaries that are generated from 8 and 128 reviews of the same hotel. OPINIONDIGEST performs robustly even for a large number of reviews. Since our framework is not based on aggregating review representations, the quality of generated text is not affected by the number of inputs and may result in better-informed summaries. 
This is a significant difference to previous work (Chu and Liu, 2019; 5793 Asp/Pol/N Input opinions Summary All/All/8 central location [SEP] lovely hotel [SEP] recommend room [SEP] good breakfast [SEP] very nice location [SEP] very dedicated staff [SEP] walking distance to coffee shops [SEP] perfect hotel [SEP] small bathroom [SEP] unkind personnel This hotel is in a perfect location, walking distance to a lot of shops and restaurants. The room was small but it was clean and comfortable. The staff were very nice and helpful. The breakfast was good and the location was very central. I would recommend this hotel and would stay there again if visiting San Francisco. All/All/128 great location [SEP] very friendly staff [SEP] great place [SEP] walkable block from union square [SEP] very clean room [SEP] really soft bed [SEP] quite spacious room [SEP] comfortable view [SEP] nice rooms [SEP] Only 20 minutes from airport The staff were very friendly and helpful. The rooms were quite spacious. The bed was very comfortable and the room was clean. The location of the hotel is great - only a few minutes from union square. The room was really nice and the view was great. All/Positive/8 pretty good food [SEP] good chicken curry [SEP] decent indian food [SEP] little hole in the [SEP] good variety of spices [SEP] inexpensive food [SEP] yummy chicken tikka masala [SEP] good indian place [SEP] stale food This is a good little indian food place. The food is good and the chicken tikka masala is pretty good. They have a good variety of indian food.The chicken biryani is good but the chicken is a little dry. All/Negative/8 worst place [SEP] rude rude rude employees [SEP] missing napkins [SEP] always rude order [SEP] forgot silverware [SEP] cold food [SEP] dirty place [SEP] cold boneless wings [SEP] sucked service [SEP] blah waitress [SEP] horrible service [SEP] mediocre food This place is disgusting. The food is mediocre at best. The service is always rude and the place is dirty and the employees are rude and rude. The food has been cold and the service has been horrible. The worst wings I’ve ever had in my entire life. The place has gone downhill since the last two times. Staff/All/8 sweetest staff [SEP] super service [SEP] great service [SEP] incredibly polite and professional staff I have been coming here for years and have never had a bad experience. The staff is super professional and the service is great. The staff are super professional as well. I would highly recommend this place to anyone. Food/All/8 good food [SEP] great chinese food [SEP] fresh food [SEP] favorite orange chicken [SEP] like shrimp fried rice [SEP] good portions [SEP] best chinese food [SEP] were really shrimp vegetable [SEP] best orange beef [SEP] really good though flavor This is my favorite Chinese food in the area. The food is really good and the portions are great. I really like the orange chicken and the crab puffs are the best I’ve had in a long time. The food here is really good. The shrimp fried rice is really good, and the rice is the best. Table 5: Example summaries on HOTEL (first two) and YELP (last four). Input opinions were filtered by the aspect categories (Asp), sentiment polarity (Pol), and # of reviews (N). Colors show the alignments between opinions and summaries. Italic denotes incorrect extraction. Underlined opinions do not explicitly appear in the summaries. Braˇzinskas et al., 2019), where averaging vectors of many reviews may hinder performance. 
Finally, we provide qualitative analysis of the controllable summarization abilities of OPINIONDIGEST, which are enabled by input opinion filtering. As discussed in Section 2.2, we filtered input opinions based on predicted aspect categories and sentiment polarity. The examples of controlled summaries (last 4 rows of Table 5) show that OPINIONDIGEST can generate aspect/sentiment-specific summaries. These examples have redundant opinions and incorrect extractions in the input, but OPINIONDIGEST is able to convert the input opinions into natural summaries. Based on OPINIONDIGEST, we have built an online demo (Wang et al., 2020)5 that allows users to customize the generated summary by specifying search terms. 5http://extremereader.megagon.info/ 4 Conclusion We described OPINIONDIGEST, a simple yet powerful framework for abstractive opinion summarization. OPINIONDIGEST is a combination of existing ABSA and seq2seq models and does not require any gold-standard summaries for training. Our experiments on the YELP dataset showed that OPINIONDIGEST outperforms baseline methods, including a state-of-the-art unsupervised abstractive summarization technique. Our user study and qualitative analysis confirmed that our method can generate controllable high-quality summaries, and can summarize large numbers of input reviews. Acknowledgements We thank Hayate Iso for helping debug the code. We also thank Prof. Mirella Lapata for helpful comments as well as the anonymous reviewers for their constructive feedback. 5794 References Reinald Kim Amplayo and Mirella Lapata. 2019. Informative and controllable opinion summarization. arXiv preprint arXiv:1909.02322. Stefanos Angelidis and Mirella Lapata. 2018. Summarizing opinions: Aspect extraction meets sentiment prediction and they are both weakly supervised. In Proc. EMNLP, pages 3675–3686. Arthur Braˇzinskas, Mirella Lapata, and Ivan Titov. 2019. Unsupervised multi-document opinion summarization as copycat-review generation. arXiv preprint arXiv:1911.02247. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proc. ACL ’16, pages 484–494, Berlin, Germany. Association for Computational Linguistics. Eric Chu and Peter J. Liu. 2019. MeanSum: A neural model for unsupervised multi-document abstractive summarization. In Proc. ICML ’19, pages 1223– 1232. Hongliang Dai and Yangqiu Song. 2019. Neural aspect and opinion term extraction with mined rules as weak supervision. In Proc. ACL ’19, pages 5268– 5277. G¨unes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. J. Artif. Intell. Res., 22:457–479. Minqing Hu and Bing Liu. 2004a. Mining and summarizing customer reviews. In Proc. KDD ’04, pages 168–177. Minqing Hu and Bing Liu. 2004b. Mining opinion features in customer reviews. In Proc. AAAI, volume 4, pages 755–760. Masaru Isonuma, Junichiro Mori, and Ichiro Sakata. 2019. Unsupervised neural single-document summarization of reviews via learning latent discourse structure and its ranking. In Proc. ACL ’19, pages 2142–2152. Svetlana Kiritchenko and Saif M. Mohammad. 2016. Capturing reliable fine-grained sentiment associations by crowdsourcing and best–worst scaling. In Proc. NAACL-HLT ’16, pages 811–817. Yuliang Li, Aaron Feng, Jinfeng Li, Saran Mumick, Alon Halevy, Vivian Li, and Wang-Chiew Tan. 2019. Subjective databases. Proc. VLDB Endow., 12(11):1330–1343. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Proc. 
ACL Workshop on Text Summarization Branches Out, pages 74–81. Yang Liu and Mirella Lapata. 2019. Hierarchical transformers for multi-document summarization. In Proc. ACL ’19, pages 5070–5081. Jordan J Louviere, Terry N Flynn, and Anthony Alfred John Marley. 2015. Best-worst scaling: Theory, methods and applications. Cambridge University Press. Huaishao Luo, Tianrui Li, Bing Liu, and Junbo Zhang. 2019. DOER: Dual cross-shared RNN for aspect term-polarity co-extraction. In Proc. ACL ’19, pages 591–601. Zhengjie Miao, Yuliang Li, Xiaolan Wang, and WangChiew Tan. 2020. Snippext: Semi-supervised opinion mining with augmented data. In Proc. WWW ’20. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proc. ICLR ’18. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get To The Point: Summarization with pointer-generator networks. In Proc. ACL ’17, pages 1073–1083. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. NIPS ’17, pages 5998–6008. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In Proc. AAAI ’17. Xiaolan Wang, Yoshihiko Suhara, Natalie Nuno, Yuliang Li, Jinfeng Li, Nofar Carmeli, Stefanos Angelidis, Eser Kandogan, and Wang-Chiew Tan. 2020. ExtremeReader: An interactive explorer for customizable and explainable review summarization. In Companion Proc. WWW ’20, page 176–180. 5795 A Hyper-parameter Sensitivity Analysis We present OPINIONDIGEST’s hyper-parameters and their default settings in Table 6. Among these hyper-parameters, we found that the performance of OPINIONDIGEST is relatively sensitive to the following hyper-parameters: top-k opinion (k), merging threshold (θ), and maximum token length (L). To better understand OPINIONDIGEST’s performance, we conducted additional sensitivity analysis of these three hyper-parameters. The results are shown in Figure 2. Top-k opinion vs Merging threshold: We tested different k = {10, 11, . . . , 20, 30} and θ = {0.6, 0.7, 0.8, 0.9}. The mean (std) of R1, R2, and RL scores were 29.2 (±0.3), 5.6 (±0.2), and 18.5 (±0.2) respectively. Top-k opinion vs Maximum token length: We tested different k = {10, 11, . . . , 20, 30} and T = {40, 50, . . . , 200}. The mean (std) of R1, R2, and RL scores were 29.2 (±0.4), 5.6 (±0.3), and 18.5 (±0.2) respectively. The results demonstrate that OPINIONDIGEST is robust to the choice of the hyper-parameters and constantly outperforms the best-performing baseline method. B Human Evaluation Setup We conducted user study via crowdsourcing using the FigureEight6 platform. To ensure the quality of annotators, we used a dedicated expert-worker pool provided by FigureEight. We present the detailed setup of our user studies as follows. Best-Worst Scaling Task: For each entity in the YELP and HOTEL datasets, we presented 8 input reviews and 3 automatically generated summaries to human annotators (Figure 3). The methods that generated those summaries were hidden from the annotators and the order of the summaries were shuffled for every entity. We further asked the annotators to select the best and worst summaries w.r.t. the following criteria: • Informativeness: How much useful information about the business does the summary provide? You need to skim through the original reviews to answer this. 
• Coherence: How coherent and easy to read is the summary?
• Non-redundancy: Is the summary successful at avoiding redundant and repeated opinions?
6 https://www.figure-eight.com/

Table 6: List of OPINIONDIGEST hyper-parameters and the default settings.
Opinion Merging: Word embedding glove.6B.300d; Top-k opinion (k) 15; Merging threshold (θ) 0.8.
Transformer model training: SGD learning rate 0.1; Momentum (β) 0.1; Decay factor (γ) 0.1; Number of epochs 5; Training batch size 8.
Decoding algorithm: Beam size 5; Length penalty 0.6; n-gram blocking (n) 3; Maximum token length (L) 60.

To evaluate the quality of the summaries for each criterion, we counted the number of best/worst votes for every system and computed the score using Best-Worst Scaling (Louviere et al., 2015): score = (|Vote_best| − |Vote_worst|) / |Votes_all|. Best-Worst Scaling is known to be more robust for NLP annotation tasks and to require fewer annotations than rating-scale methods (Kiritchenko and Mohammad, 2016). We collected responses from 3 human annotators for each question and computed the scores w.r.t. informativeness (I-score), coherence (C-score), and non-redundancy (R-score) accordingly.

Content Support Task: For the content support study, we presented the 8 input reviews to the annotators and an opinion summary produced from these reviews by one of the competing methods (ours or MeanSum). We asked the annotators to determine, for every summary sentence, whether it is fully supported, partially supported, or not supported by the input reviews (Figure 4). We collected 3 responses per review sentence and calculated the ratio of responses.

Aspect-Specific Summary Task: Finally, we studied the performance of OPINIONDIGEST in terms of its ability to generate controllable output. We presented the summaries to human judges and asked them to judge whether the summaries discussed the specified aspect exclusively, partially, or not at all (Figure 5).
We again collected 3 responses per summary and calculated the percentage of responses.

Figure 2: Sensitivity analysis on hyper-parameters (ROUGE-1, ROUGE-2, and ROUGE-L heatmaps). Above row: Top-k opinion (k) vs merging threshold (θ); Bottom row: Top-k opinion (k) vs max token size (L).
Figure 3: Screenshot of Best-Worst Scaling Task.
Figure 4: Screenshot of Content Support Task.
Figure 5: Screenshot of Aspect-Specific Summary Task.
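For reference, a tiny numeric illustration of the Best-Worst Scaling score defined in Appendix B; the scaling to the ±100 range reported in Table 2 is our assumption, since the paper states the range but not the multiplier explicitly.

```python
# Sketch: Best-Worst Scaling score from vote counts (scaling to ±100 is assumed).
def bws_score(n_best, n_worst, n_total):
    return 100.0 * (n_best - n_worst) / n_total

# e.g., 120 "best" votes and 56 "worst" votes out of 300 total responses
print(bws_score(120, 56, 300))   # ~21.3
```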
2020
513
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5799–5810 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5799 A Comprehensive Analysis of Preprocessing for Word Representation Learning in Affective Tasks Nastaran Babanejad, Ameeta Agrawal, Aijun An, Manos Papagelis Department of Electrical Engineering and Computer Science, York University, Toronto, Canada {nasba, ameeta, aan, papaggel}@eecs.yorku.ca Abstract Affective tasks such as sentiment analysis, emotion classification and sarcasm detection have been popular in recent years due to abundance of user-generated data, accurate computational linguistic models, and broad range of relevant applications in various domains. At the same time, many studies have highlighted the importance of text preprocessing, as an integral step to any natural language processing prediction model and downstream task. While preprocessing in affective systems is well-studied, preprocessing in word vector based models applied to affective systems, is not. To address this limitation, we conduct a comprehensive analysis of the role of preprocessing techniques in affective analysis based on word vector models. Our analysis is the first of its kind and provides useful insights of the importance of each preprocessing technique when applied at the training phase, commonly ignored in pretrained word vector models, and/or at the downstream task phase. 1 Introduction Affective tasks such as sentiment analysis, emotion classification and sarcasm detection have enjoyed great popularity in recent years. This success can be largely attributed to the fundamental and straightforward nature of the methods employed, the availability of vast amounts of user-generated natural language data, and the wide range of useful applications, spanning from hate speech detection to monitoring the sentiment of financial markets and news recommendation (Djuric et al., 2015; Babanejad et al., 2019). Most early models of affect analysis employed pretrained word embeddings that have been obtained under the assumption of the distributional hypothesis (Mikolov et al., 2013; Devlin et al., 2018). The distributional hypothesis suggests that two words occurring frequently in similar linguistic contexts tend to be more semantically similar, and therefore should be represented closer to one another in the embedding space. However, while such embeddings are useful for several natural language processing (NLP) downstream tasks, they are known to be less suitable for affective tasks in particular (Tang et al., 2014; Agrawal et al., 2018). Although some authors claim that there is a need for post-processing word embeddings for affective tasks, others find that off-theshelf vectors are very powerful for affective lexicon learning (Lison and Kutuzov, 2017). For example, word2vec (Mikolov et al., 2013) estimates the pair of words ‘happy’ and ‘sad’ to be more similar than the pair of words ‘happy’ and ‘joy’, which is counterintuitive, and might affect the accuracy performance of the models that depend on it. To address the limitations of traditional word embeddings, several techniques have been proposed, including task-specific fine-tuning (Devlin et al., 2018), retrofitting (Faruqui et al., 2014), representing emotion with vectors using a multi-task training framework (Xu et al., 2018) and generating affective word embeddings (Felbo et al., 2017), to name a few. 
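The kind of counterintuitive similarity mentioned above (e.g., 'happy' scoring closer to 'sad' than to 'joy') can be inspected directly with any pretrained word2vec-format vectors, for instance via gensim. The snippet below is only an illustration: the vector file is a placeholder and the exact scores depend on which pretrained model is loaded.

```python
# Inspect pairwise similarities in a pretrained embedding space (illustrative only;
# the file path is a placeholder and printed values depend on the loaded vectors).
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
for a, b in [("happy", "sad"), ("happy", "joy")]:
    print(a, b, kv.similarity(a, b))
# Because antonyms often share contexts, purely distributional training can rank
# 'happy'/'sad' above 'happy'/'joy', which is problematic for affective tasks.
```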
Other attempts to overcome the limitations of word vectors include the optimization of hyperparameters (Levy et al., 2015), as well as fine-tuned preprocessing strategies tailored to different NLP tasks. While these strategies have demonstrated evidence of improving accuracy in tasks such as word similarity, word analogy, and others (Lison and Kutuzov, 2017), their effect in affective tasks has not received considerable attention and remains less explored. Our work is motivated by the observation that preprocessing factors such as stemming, stopwords removal and many others make up an integral part of nearly every improved text classification model, and of affective systems in particular (Danisman and Alpkocak, 2008; Patil and Patil, 2013). However, little work has been done towards understanding the role of preprocessing techniques applied to word embeddings in different stages of affective systems.

[Figure 1: Framework of applying preprocessing in different stages in affective systems; (a) Pre, (b) Post.]

To address this limitation, the overarching goal of this research is to perform an extensive and systematic assessment of the effect of a range of linguistic preprocessing factors pertaining to three affective tasks: sentiment analysis, emotion classification and sarcasm detection. Towards that end, we systematically analyze the effectiveness of applying preprocessing to large training corpora before learning word embeddings, an approach that has largely been overlooked by the community. We investigate the following research questions: (i) what is the effect of integrating preprocessing techniques earlier, into word embedding models, instead of later on, in downstream classification models? (ii) which preprocessing techniques yield the most benefit in affective tasks? (iii) does preprocessing of word embeddings provide any improvement over state-of-the-art pretrained word embeddings, and if yes, how much? Figure 1 illustrates the difference between (a) the preprocessed word embeddings pipeline (Pre) and (b) the preprocessed classification dataset pipeline (Post), where the preprocessing techniques in (a) are applied to the training corpus of the embedding model and in (b) only to the classification dataset.

In brief, the main contributions of our work are as follows:
• We conduct a comprehensive analysis of the role of preprocessing techniques in affective tasks (including sentiment analysis, emotion classification and sarcasm detection), employing different models, over nine datasets;
• We perform a comparative analysis of the accuracy of word vector models when preprocessing is applied at the training phase (training data) and/or at the downstream task phase (classification dataset). Interestingly, we obtain the best results when preprocessing is applied only to the training corpus or when it is applied to both the training corpus and the classification dataset of interest;
• We evaluate the performance of our best preprocessed word vector model against state-of-the-art pretrained word embedding models;
• We make source code and data publicly available to encourage reproducibility of results1.

The rest of the paper is organized as follows: Section 2 presents an overview of the related work. Section 3 elaborates on the preprocessing techniques employed in the evaluation of models. Section 4 describes the experimental evaluation framework. Section 5 provides a comprehensive analysis of the results. Section 6 concludes the paper with key insights of the research.
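To make the Pre/Post distinction of Figure 1 concrete, the two pipelines can be summarized as follows. The component functions (preprocess, train_embeddings, vectorize, train_classifier) are placeholders passed in by the caller and stand in for the models described later in the paper; nothing here is taken from the authors' released code.

```python
# Illustrative outline of the two pipelines in Figure 1; the component functions
# are supplied by the caller and are placeholders for the concrete models used.

def pre_pipeline(training_corpus, clf_dataset,
                 preprocess, train_embeddings, vectorize, train_classifier):
    """(a) Pre: preprocess the large training corpus before learning embeddings."""
    embeddings = train_embeddings(preprocess(training_corpus))
    return train_classifier(vectorize(clf_dataset, embeddings))

def post_pipeline(training_corpus, clf_dataset,
                  preprocess, train_embeddings, vectorize, train_classifier):
    """(b) Post: learn embeddings from the raw corpus; preprocess only the classification dataset."""
    embeddings = train_embeddings(training_corpus)
    return train_classifier(vectorize(preprocess(clf_dataset), embeddings))
```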
2 Related Work In this section, we present an overview of related work on preprocessing classification datasets and preprocessing word embeddings, and how our work aims to bridge the gap between those efforts. 2.1 Preprocessing Classification Datasets Preprocessing is a vital step in text mining and therefore, evaluation of preprocessing techniques has long been a part of many affective systems. Saif et al. (2014) indicated that, despite its popular use in Twitter sentiment analysis, the use of precompiled stoplist has a negative impact on the classification performance. Angiani et al. (2016) analyzed various preprocessing methods such as stopwords removal, stemming, negation, emoticons, and so on, and found stemming to be most effective for the task of sentiment analysis. Similarly, Symeonidis et al. (2018) found that lemmatization increases accuracy. Jianqiang and Xiaolin (2017) observed that removing stopwords, numbers, and URLs can reduce noise but does not affect performance, whereas replacing negation and expanding acronyms can improve the classification accuracy. 1https://github.com/NastaranBa/ preprocessing-for-word-representation 5801 Preprocessing techniques such as punctuation and negation (Rose et al., 2018) or pos-tagging and negation (Seal et al., 2020) make up a common component of many emotion classification models (Kim et al., 2018; Patil and Patil, 2013). One of the earliest works (Danisman and Alpkocak, 2008) preserved emotion words and negative verbs during stopwords removal, replaced punctuation with descriptive new words, replaced negative short forms with long forms, and concatenated negative words with emotion words to create new words (e.g., not happy →NOThappy ). Although stemming may remove the emotional meaning from some words, it has been shown to improve classification accuracy (Danisman and Alpkocak, 2008; Agrawal and An, 2012). Negations have also been found beneficial, whereas considering intensifiers and diminishers did not lead to any improvements (Strohm, 2017). Pecar et al. (2018) also highlight the importance of preprocessing when using user-generated content, with emoticons processing being the most effective. Along the same lines, while Gratian and Haid (2018) found pos-tags to be useful, Boiy et al. (2007) ignored pos-tagging because of its effect of reducing the classification accuracy The aforementioned works describe preprocessing techniques as applied directly to evaluation datasets in affective systems. In contrast, we examine the effectiveness of directly incorporating these known effective preprocessing techniques further “upstream” into the training corpus of word embeddings, which are widely used across a number of downstream tasks. 2.2 Preprocessing Word Embeddings Through a series of extensive experiments, particularly those related to context window size and dimensionality, (Levy et al., 2015) indicate that seemingly minor variations can have a large impact on the success of word representation methods in similarity and analogy tasks, stressing the need for more analysis of often ignored preprocessing settings. Lison and Kutuzov (2017) also present a systematic analysis of context windows based on a set of four hyperparameters, including window position and stopwords removal, where the right window was found to be better than left for English similarity task, and stopwords removal substantially benefited analogy task but not similarity. 
A general space of hyperparameters and preprocessing factors such as context window size (Hershcovich et al., 2019; Melamud et al., 2016), dimensionality (Melamud et al., 2016), syntactic dependencies (Levy and Goldberg, 2014; Vuli´c et al., 2020) and their effect on NLP tasks including word similarity (Hershcovich et al., 2019), tagging, parsing, relatedness, and entailment (Hashimoto et al., 2017) and biomedical (Chiu et al., 2016) has been studied extensively in the literature. The main conclusion of these studies, however, is that these factors are heavily task-specific. Therefore, in this work we explore preprocessing factors of generating word embeddings specifically tailored to affective tasks, which have received little attention. A recent study investigated the role of tokenizing, lemmatizing, lowercasing and multiword grouping (Camacho-Collados and Pilehvar, 2018) as applied to sentiment analysis and found simple tokenization to be generally adequate. In the task of emotion classification, Mulki et al. (2018) examined the role of four preprocessing techniques as applied to a vector space model based on tf-idf trained on a small corpus of tweets, and found stemming, lemmatization and emoji tagging to be the most effective factors. Distinct from prior works, we examine a much larger suite of preprocessing factors grounded in insights derived from numerous affective systems, trained over two different corpora, using three different word embedding models. We evaluate the effect of the preprocessed word embeddings in three distinct affective tasks including sentiment analysis, emotion classification and sarcasm detection. 3 Preprocessing in Affective Systems This section describes the preprocessing factors applied to the training corpus that is then used to generate word representations and the order of the preprocessing factors which we need to follow when applying on the corpus. 3.1 Preprocessing Factors Basic: A group of common text preprocessing applied at the very beginning, such as removing html tags, removing numbers, and lowercasing. This step removes all common punctuation from text, such as “@%*=()/ +” using the NLTK regexptokenizer2. Spellcheck (spell): A case can be made for either correcting misspellings and typos or leaving 2https://www.nltk.org/ modules/nltk/tokenize/regexp.html 5802 them as is assuming they represent natural language text and its associated complexities. In this step, we identify words that may have been misspelled and correct them3. As unambiguous spell corrections are not very common and in most cases we have multiple options for correction, we built our own custom dictionary to suggest a replacement by parsing the ukWac corpora4 to retrieve a wordfrequency list. A misspelled word that has multiple replacements is replaced with the suggested word that has the maximum frequency in the corpora. Negation (neg): Negation is a mechanism that transforms a positive argument into its inverse rejection (Benamara et al., 2012). Specifically in the task of affective analysis, negation plays a critical role as negation words can affect the word or sentence polarity causing the polarity to invert in many cases. Our negation procedure is as follows: (i) Compilation of an antonym dictionary: The first stage involves compiling an antonym dictionary using the WordNet corpus (Miller, 1995). For every synset, there are three possibilities: finding no antonym, one antonym or multiple antonyms. The first two cases are trivial (unambiguous replacements). 
In the case of the third option (ambiguous replacement), which represents the most common case, amongst the many choices, we consider the antonym with the maximum frequency in the ukWac corpus, as described in the previous section and finally the antonym of a word is picked at random from one of its senses in our antonym dictionary. (ii) Negation handler: Next, we identify the negation words in tokenized text5. If a negation word is found, the token following it (i.e., negated word) is extracted and its antonym looked up in the antonym dictionary. If an antonym is found, the negation word and the negated word are replaced with it. For example, let the sentence “I am not happy today” in its tokenized form [‘I’, ‘am’, ‘not’, ‘happy’, ‘today’]. First, we identify any negation words (i.e., ‘not’) and their corresponding negated words (i.e., ‘happy’). Then, we look up the antonym of ‘happy’ in the antonym dictionary (i.e., ‘sad’) and replace the phrase ‘not happy’ with the word ‘sad’, resulting in a new sentence “I am sad today”. Parts-of-Speech (pos): Four parts-of-speech 3https://pypi.org/project/pyspellchecker/ 4https://www.sketchengine.eu/ukwac-british-englishcorpus/ 5https://pypi.org/project/negspacy/ classes, namely nouns, verbs, adjectives and adverbs have been shown to be more informative with regards to affect than the other classes. Thus, using the NLTK pos-tagger, for each sentence in the corpus we retain only the words belonging to one of these four classes, i.e., NN*, JJ*, VB*, and RB*. Stopwords (stop): Stopwords are generally the most common words in a language typically filtered out before classification tasks. Therefore, we remove all the stopwords using the NLTK library. Stemming (stem): Stemming, which reduces a word to its root form, is an essential preprocessing technique in NLP tasks. We use NLTK Snowball stemmer for stemming our training corpus. 3.2 Order of Preprocessing Factors While some preprocessing techniques can be applied independently of each other (e.g., removing stopwords and removing punctuation), others need a more careful consideration of the sequence in which they are applied in order to obtain a more stable result. For instance, pos-tagging should be applied before stemming in order for the tagger to work well, or negation should be performed prior to removing stopwords. To this end, we consider the following ordering when combining all the aforementioned preprocessing factors: spellchecking, negation handling, pos classes, removing stopwords, and stemming. 4 Experimental Evaluation Framework 4.1 Training Corpora Table 1 summarizes the details of our two training corpora with regards to their vocabulary and corpus sizes after applying various preprocessing settings. For some preprocessing such as POS (pos) and stopwords removal (stop), without any significant loss in vocabulary as indicated by the % ratio of preprocessed to basic, the corpus size reduces dramatically, in some cases more than 50%, a nontrivial implication with regards to training time. News: This corpus consists of 142,546 articles from 15 American publications, spanning from 2013 to early 20186. Wikipedia: Comparatively a much larger corpus than the News, this corpus consists of 23,046,187 articles from Wikipedia 7. 
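Returning to the negation procedure of Section 3.1, the replacement step it describes can be sketched as follows. The tiny antonym dictionary and negation-word list below are toy stand-ins for the WordNet/ukWaC-derived resources described above.

```python
# Minimal sketch of the negation-handling step: when a negation word precedes a
# token with an antonym-dictionary entry, the pair is replaced by the antonym.
NEGATION_WORDS = {"not", "never", "no"}       # illustrative subset
ANTONYMS = {"happy": "sad", "good": "bad"}    # toy antonym dictionary

def handle_negation(tokens):
    out, i = [], 0
    while i < len(tokens):
        if tokens[i] in NEGATION_WORDS and i + 1 < len(tokens) and tokens[i + 1] in ANTONYMS:
            out.append(ANTONYMS[tokens[i + 1]])  # e.g. ['not', 'happy'] -> ['sad']
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(handle_negation(["i", "am", "not", "happy", "today"]))  # ['i', 'am', 'sad', 'today']
```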
6https://www.kaggle.com/snapcrack/all-the-news 7https://www.kaggle.com/jkkphys/english-wikipediaarticles-20170820-sqlite 5803 Corpus Processing Vocab Corpus size % size % News Basic 155K 100 123.2M 100 spell 149K 96 123.2M 100 stem 137K 88 123.2M 100 punc 147K 95 111.0M 90 neg 152K 98 90.7M 73 stop 150K 97 75.6M 61 pos 154K 99 70.7M 57 All - punc 151K 97 93.7M 76 All - pos 140K 90 90.5M 73 All - stop 150K 97 75.3M 61 All 110K 71 55.2M 49 All - stem 110K 71 58.1M 47 All - spell 110K 71 56.4M 46 All - neg 110K 71 54.3M 44 Wikipedia Basic 5.1M 100 8.1B 100 All - punc 4.9M 96 7.2B 89 All - pos 4.8M 94 7.0B 86 All - stop 4.9M 96 6.8B 84 All - stem 4.3M 84 6.4B 79 All - spell 4.6M 90 6.1B 75 All 4.6M 90 5.6B 69 All - neg 4.6M 90 5.0B 62 Table 1: Details of training corpora Dataset Genre Task Total IMDB reviews sentiment 50,000 SemEval tweets sentiment 14,157 Airline tweets sentiment 11,541 ISEAR narratives emotions 5,477 Alm fairy tales emotions 1,206 SSEC tweets emotions 1,017 Onion headlines sarcasm 28,619 IAC response sarcasm 3,260 Reddit comments sarcasm 1,010,826 Table 2: Details of evaluation datasets 4.2 Word Embedding Models We obtain our preprocessed word representations through three models: (i) CBOW (Continuous Bag-of-Words), (ii) Skip-gram: While CBOW takes the context of each word as the input and tries to predict the word corresponding to the context, skip-gram reverses the use of target and context words, where the target word is fed at the input and the output layer of the neural network is replicated multiple times to accommodate the chosen number of context words (Mikolov et al., 2013). We train both the models on both the training corpora using min count of 5 for News and 100 for Wikipedia with window sizes of 5 and 10, respectively, setting dimensionality to 300. (iii) BERT (Bidirectional Encoder Representations from Transformers): BERT is an unsupervised method of pretraining contextualized language representations (Devlin et al., 2018). We train the model using BERT large uncased architecture (24-layer, 1024-hidden, 16-heads, 340M parameters) with same setting for parameters as the original paper. We train each of the three models (CBOW, Skipgram and BERT) 8 times using 16 TPUs (64 TPU chips), Tensorflow 1.15, 1TB memory on Google Cloud and two 32 GPUs cluster of V100/RTX 2080 Ti, 1TB memory using Microsoft CNTK parallelization algorithm8 on Amazon server. For a large model such as BERT, it takes upto 4-5 days for each run of the training. 4.3 Evaluation Datasets We conduct our evaluation on three tasks, namely sentiment analysis, emotion classification and sarcasm detection. Table 2 presents the details of our evaluation datasets, and some illustrative examples of text are shown in Table 3. Sentiment Analysis: This popular task involves classifying text as positive or negative, and we use the following three datasets for evaluation: (i) IMDB: This dataset9 includes 50,000 movie reviews for sentiment analysis, consisting of 25,000 negative and 25,000 positive reviews Maas et al. (2011). (ii) Semeval 2016: This sentiment analysis in Twitter dataset10 consists of 14,157 tweets where 10,076 of them are positive and 4,081 negative Nakov et al. (2016). (iii) Airlines: This sentiment analysis dataset11 consists of 11,541 tweets about six U.S. airlines from February 2015, with 9,178 tweets labeled as positive and 2,363 negative. 
Emotion Classification: A multiclass classification task, this involves classifying text into a number of emotion categories such as happy, sad, and so on. The following datasets are used in our evaluation: (i) SSEC: The Stance Sentiment Emotion Corpus (Schuff et al., 2017) is a re-annotation of the SemEval 2016 Twitter stance and sentiment corpus (Mohammad et al., 2017) with emotion labels including anger, joy, sadness, fear and surprise.12 (ii) ISEAR: This dataset contains narratives of personal experiences evoking emotions (Wallbott and Scherer, 1986). We use a subset of the data consisting of five categories: sadness, anger, disgust, fear, joy. (iii) Alm: This dataset contains sentences from fairy tales marked with one of five emotion categories: angry-disgusted, fearful, happy, sad and surprised (Cecilia and Ovesdotter, 2008).

8https://docs.microsoft.com/en-us/cognitivetoolkit/multiple-gpus-and-machines
9http://ai.stanford.edu/~amaas/data/sentiment/
10http://alt.qcri.org/semeval2016/task4/index.php
11https://www.kaggle.com/crowdflower/twitter-airline-sentiment
12SSEC: http://www.romanklinger.de/ssec/

Text | Label | Dataset
· I must admit that this is one of the worst movies I've ever seen. I thought Dennis Hopper had a little more taste than to appear in this kind of yeeeecchh... [truncated] | negative | IMDB
· everything was fine until you lost my bag. | negative | Airline
· At work, when an elderly man complained unjustifiably about me and distrusted me. | anger | ISEAR
· The ladies danced and clapped their hands for joy. | happy | Alm
· if this heat is killing me i don't wanna know what the poor polar bears are going through | sadness | SSEC
· ford develops new suv that runs purely on gasoline | sarcastic | Onion
· Been saying that ever since the first time I heard about creationsism | not-sarcastic | IAC
· Remember, it's never a girl's fault, it's always the man's fault. | sarcastic | Reddit
Table 3: Examples of text instances in the evaluation datasets

Sarcasm Detection: Detecting sarcasm from text, a challenging task due to the sophisticated nature of sarcasm, involves labeling text as sarcastic or not. We use the following three datasets: (i) Onion: This news headlines dataset13 collected sarcastic versions of current events from The Onion and non-sarcastic news headlines from HuffPost (Misra and Arora, 2019), resulting in a total of 28,619 records. (ii) IAC: A subset of the Internet Argument Corpus (Oraby et al., 2016), this dataset contains response utterances annotated for sarcasm. We extract 3,260 instances from the general sarcasm type.14 (iii) Reddit: The Self-Annotated Reddit Corpus (SARC)15 is a collection of Reddit posts where sarcasm is labeled by the author, in contrast to other datasets where the data is typically labeled by independent annotators (Khodak et al., 2017).

4.4 Classification Setup
For classification, we employ the LSTM model as it works well with sequential data such as text. For binary classification, such as sentiment analysis and sarcasm detection, the loss function used is the binary cross-entropy along with sigmoid activation:

$\xi = -\frac{1}{N}\sum_{i=1}^{N}\big[y_i \log(p(y_i)) + (1 - y_i)\log(1 - p(y_i))\big]$

where $y$ is the binary representation of the true label, $p(y)$ is the predicted probability, and $i$ denotes the $i$-th training sample.
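For reference, this binary objective can be written directly as a small function. The sketch below is ours; the eps clipping is an addition for numerical stability and is not part of the formula in the text.

```python
# NumPy sketch of the binary cross-entropy defined above.
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))

print(binary_cross_entropy([1, 0, 1], [0.9, 0.2, 0.8]))  # ~0.18
```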
13https://github.com/rishabhmisra/News-Headlines-Dataset-For-Sarcasm-Detection
14https://nlds.soe.ucsc.edu/sarcasm2
15SARC v0.0: https://nlp.cs.princeton.edu/SARC/0.0/

For multiclass emotion classification, the loss function used is the categorical cross-entropy loss over a batch of N instances and k classes, along with softmax activation:

$\xi = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{k} y_{ij}\log(p(y_{ij}))$

where $p(y)$ is the predicted probability distribution and $p(y_{ij}) \in [0, 1]$. The optimizer is Adam (Kingma and Ba, 2014), all loss functions are sample-wise, and we take the mean over all samples (epochs = 5, 10; batch size = 64, 128). All sentiment and sarcasm datasets are split into training/testing using 80%/20%, with 10% of the training data held out for validation. For the smaller and imbalanced emotion datasets, we use stratified 5-fold cross-validation. We use a dropout layer to prevent overfitting by ignoring randomly selected neurons during training, and early stopping when the validation loss stops improving (patience = 3, min-delta = 0.0001). The results are reported in terms of weighted F-score (as some emotion datasets are highly imbalanced), where $F\text{-score} = \frac{2\,p\,r}{p + r}$, with $p$ denoting precision and $r$ recall.

5 Discussion and Analysis
We analyze the impact of preprocessing techniques in word representation learning on affect analysis.

5.1 Effect of Preprocessing Factors
A primary goal of this work is to identify the most effective preprocessing factors for training word embeddings for affective tasks. Table 4 details the results of our experiments comparing the performance of individual preprocessing factors as well as those of ablation studies (i.e., including all the factors but one). Observing the performance of the individual factors on the News corpus, we note that even a single simple preprocessing technique can bring improvements, thereby validating our intuition of incorporating preprocessing into the training corpora of word representations.
Second, negation (neg) processing appears to be consistently the most 5805 Models Processing IMDB Semeval Airline IAC Onion Reddit Alm ISEAR SSEC CBOW Basic 83.99 55.69 60.73 65.74 68.23 59.42 36.81 55.43 51.76 stop 84.43 55.72 61.37 66.03 68.17 59.27 36.81 56.01 52.33 spell 86.20 55.93 61.96 66.00 69.57 60.00 36.88 56.41 52.14 stem 86.92 55.72 61.86 65.89 68.49 59.72 36.94 55.84 51.89 punc 86.99 56.41 62.08 65.93 69.85 60.28 36.94 56.89 52.03 pos 85.66 56.83 62.75 66.32 70.25 60.63 37.02 57.04 53.19 neg 88.98 57.29 63.81 66.87 71.12 60.91 37.22 57.39 54.15 All 89.96 57.82 64.58 67.23 70.90 60.84 37.43 57.72 53.71 All - neg 84.67 55.00 61.58 66.02 69.73 59.94 36.91 55.89 51.94 All - pos 85.69 56.31 64.29 66.97 70.48 60.15 37.19 56.27 52.16 All - punc 86.41 56.88 63.01 66.75 70.01 60.00 37.01 57.19 52.43 All - spell 88.23 56.41 63.87 67.23 70.83 60.27 37.22 57.41 53.41 All - stop 90.01 60.82 66.84 67.20 72.49 62.11 38.96 59.28 55.00 All - stem 88.12 60.82 67.12 69.25 72.13 61.73 38.00 59.00 55.42 Skip-gram Basic 83.07 54.23 61.47 65.51 68.01 59.75 35.87 55.64 51.49 stop 83.23 55.47 62.00 65.62 68.00 59.84 35.94 55.76 51.62 spell 85.90 55.48 62.00 65.61 69.76 60.28 36.10 55.93 52.30 stem 86.00 55.33 61.89 65.60 68.72 59.50 36.00 55.69 51.40 punc 86.68 55.79 62.38 65.89 70.00 60.44 36.41 56.81 52.71 pos 85.91 56.28 63.25 66.24 69.81 60.85 36.44 56.23 52.94 neg 87.28 56.89 63.72 66.87 70.59 61.27 36.87 57.34 53.10 All 88.36 57.04 64.91 66.94 70.73 61.12 37.10 57.92 53.58 All - neg 83.26 54.00 61.95 66.00 69.88 60.00 36.94 55.97 51.89 All - pos 86.21 55.22 65.12 66.06 69.88 61.00 37.00 56.42 52.10 All - punc 85.57 55.99 64.29 66.29 70.00 60.98 37.01 57.02 52.53 All - spell 86.00 56.98 65.00 66.25 70.25 0.61 37.04 57.69 52.86 All - stop 88.74 60.93 67.00 68.57 72.20 62.02 38.92 59.18 55.18 All - stem 88.42 60.67 67.39 69.08 72.00 62.36 37.44 59.48 55.23 Table 4: F-score results of evaluating the effect of preprocessing factors using CBOW and Skip-gram on News corpus. The overall best results are in bold. The best result using only any one preprocessing setting is underlined. effective factor across all the 9 datasets, indicating its importance in affective classification, followed by parts-of-speech (pos) processing where we retained words belonging only to one of four classes. On the other hand, removing stopwords (stop), spellchecking (spell) and stemming (stem) yield little improvement and mixed results. Interestingly, applying all the preprocessing factors is barely better or in some cases even worse (Onion, Reddit and SSEC) than applying just negation. Finally, the best performance comes from combining all the preprocessing factors except stemming (All-stem). Moreover, Table 5 details the performance of ablation studies on Wikipedia corpus for all three models where we note that the best performance for the CBOW model comes from combining all the preprocessing factors except stemming (All-stem), whereas for the Skip-gram and BERT models, the best results are obtained by applying all the preprocessing factors except stopwords removal (All-stop). Considering that the Wikipedia corpus is almost 160 times bigger than the News corpus, it is unsurprising that the word embeddings obtained from the former yield considerably better results, consistent across all nine datasets. 5.2 Evaluating Preprocessing Training Corpora for Word Vectors vs. 
Preprocessing Classification Data We investigate the difference between applying preprocessing to the training corpora for generating word embeddings (Pre) and applying preprocessing to the classification datasets (Post). As an example, during Pre, we first apply the preprocessing techniques (e.g., all but stemming) to the training corpus (e.g., Wikipedia), then generate word embeddings, then convert a classification dataset (e.g., IMDB) into word embedding representation, and finally classify using LSTM. Conversely, for Post, we first generate word embeddings from a training corpus (e.g., Wikipedia), then apply the preprocessing techniques (e.g., all but stemming) to the classification dataset (e.g., IMDB), which is then converted to word vector representation, and finally classified using LSTM 16. The results of this experiment are presented in Table 6, where we observe that incorporating preprocessing into the training corpora before generat16Note: For settings including stemming, the classification data is also stemmed in order to obtain a compatible vocabulary. 5806 Models Processing IMDB Semeval Airline IAC Onion Reddit Alm ISEAR SSEC CBOW Basic 84.91 56.89 68.11 69.15 71.02 63.58 45.22 59.73 55.84 All 88.41 60.25 71.39 71.57 73.61 65.27 48.81 62.48 57.42 All - neg 83.02 56.03 69.28 69.55 70.25 64.18 46.00 60.42 55.93 All - pos 85.69 57.21 71.00 70.08 72.29 64.82 47.53 62.28 56.25 All - punc 84.00 57.36 70.46 70.01 72.02 65.00 47.68 61.84 56.64 All - spell 86.19 58.26 70.98 70.59 72.85 65.00 47.29 61.63 57.00 All - stop 91.10 61.00 73.00 72.31 74.50 68.20 52.39 64.29 58.46 All - stem 88.76 62.19 73.25 72.36 75.69 68.53 50.28 65.33 59.28 Skip-gram Basic 84.00 55.94 68.36 69.20 71.68 63.74 45.01 59.45 55.62 All 87.00 59.99 71.29 71.25 73.82 65.67 48.51 65.02 57.13 All - neg 84.97 56.11 69.00 70.17 70.04 64.55 46.28 60.54 55.86 All - pos 86.21 57.62 70.25 70.85 73.22 65.47 47.49 63.44 56.00 All - punc 85.00 57.20 70.00 70.77 72.00 65.00 47.10 61.72 56.49 All - spell 85.75 58.49 70.26 70.89 72.63 65.18 47.14 61.25 56.84 All - stop 89.76 61.74 72.19 72.00 75.69 68.29 52.01 64.00 58.14 All - stem 89.66 60.28 73.66 71.98 75.24 68.72 51.39 63.44 59.01 BERT Basic 90.11 70.82 90.23 71.19 76.30 59.74 57.81 65.70 65.39 All 91.86 71.76 91.73 73.66 78.72 62.60 59.74 67.80 67.49 All - neg 90.33 70.52 91.04 72.00 77.07 61.44 58.14 66.59 66.10 All - pos 91.01 71.20 91.66 73.31 78.45 62.04 59.01 66.25 68.13 All - punc 91.59 71.50 91.60 73.18 78.54 62.27 59.60 67.25 67.27 All - spell 91.78 71.13 91.34 73.02 78.40 62.00 59.44 67.21 67.30 All - stop 94.18 73.81 94.85 75.80 79.10 65.39 60.73 69.33 69.81 All - stem 92.19 71.94 92.03 74.49 77.93 63.74 60.16 68.00 67.05 Table 5: F-score results of evaluating the effect of preprocessing factors using different models on Wikipedia corpus. The overall best results are shown in bold. 
Models Processing IMDB Semeval Airline IAC Onion Reddit Alm ISEAR SSEC CBOW Post 87.49 59.33 71.28 69.87 74.20 67.13 47.19 62.00 56.27 Pre 88.76 62.19 73.25 72.36 75.69 68.53 50.28 65.33 59.28 Both 88.10 62.41 73.00 71.86 75.00 70.10 50.39 64.52 58.20 Skip-gram Post 88.14 60.41 71.85 70.22 75.07 67.00 50.44 62.08 56.00 Pre 89.76 61.74 72.19 72.00 75.69 68.29 52.01 64.00 58.14 Both 89.33 61.25 73.58 71.62 75.48 68.74 51.68 65.29 58.03 BERT Post 94.58 70.25 92.35 74.69 77.10 63.38 58.40 68.20 67.17 Pre 94.18 73.81 94.85 75.80 79.10 65.39 60.73 69.33 69.81 Both 94.63 72.41 93.00 75.19 78.69 65.17 60.33 69.06 68.43 Table 6: F-score results of evaluating the effect of preprocessing word embeddings training corpus vs. preprocesssing evaluation datasets ing word vectors (Pre) outperforms preprocessing classification datasets (Post) across all nine datasets of the three affective tasks. Interestingly though, preprocessing both the bodies of text (Both) appears to be of little benefit, suggesting the importance of preprocessing training corpora used for obtaining word embeddings. 5.3 Evaluating Proposed Model against State-of-the-art Baselines While not a primary focus of this paper, in this final experiment we compare the performance of our preprocessed word embeddings against those of six state-of-the-art pretrained word embeddings17. 17These vectors obtained from their original repositories have been used without any modifications. (i) GloVe: Global vectors for word representations (Pennington et al., 2014) were trained on aggregated global word co-occurrences. We use the vectors trained on GloVe6B 6 billion words18, uncased, from Wikipedia and Gigaword. (ii) SSWE: Sentiment Specific Word Embeddings (unified model)19 were trained using a corpus of 10 million tweets to encode sentiment information into the continuous representation of words (Tang et al., 2014). (iii) FastText: These pretrained word vectors20, based on sub-word character n-grams were trained on Wikipedia using fastText (Bojanowski et al., 2017), an extension of the word2vec model. 18https://nlp.stanford.edu/projects/glove/ 19http://ir.hit.edu.cn/˜dytang/paper/sswe/embeddingresults.zip 20https://github.com/facebookresearch/fastText 5807 Models IMDB Semeval Airline IAC Onion Reddit Alm ISEAR SSEC GloVe 85.64 70.29 70.21 70.19 71.39 63.57 56.21 65.30 58.40 SSWE 80.45 69.27 78.29 64.85 52.74 50.73 51.00 54.71 52.18 FastText 75.26 68.55 70.69 55.74 58.29 59.37 52.28 25.40 53.20 DeepMoji 69.79 62.10 71.03 65.67 70.90 53.08 46.33 58.20 58.90 EWE 71.28 60.27 67.81 67.43 70.06 55.02 58.33 66.09 58.94 Our best results: CBOW 91.10 62.19 73.25 72.36 75.69 68.53 52.39 65.33 59.28 Skip-gram 89.76 61.74 73.66 72.00 75.69 68.72 52.01 65.02 59.01 BERT 94.18 73.81 94.85 75.80 79.10 65.39 60.73 69.33 69.81 Table 7: F-score results of comparing against state-of-the-art word embeddings. The best score is highlighted in bold, and the second best result is underlined. (iv) DeepMoji: These word embeddings21 were trained using BiLSTM on 1.2 billion tweets with emojis (Felbo et al., 2017). (v) EWE: Emotionenriched Word Embeddings22 were learned on 200,000 Amazon product reviews corpus using an LSTM model (Agrawal et al., 2018). From the results in Table 7, we notice that BERT is best on eight out of nine datasets except one sarcasm dataset (Reddit), while word2vec CBOW is the second best on four datasets. Overall, our analysis suggests that preprocessing at word embedding stage (Pre) works well for all the three affective tasks. 
5.4 Analyzing the Three Affective Tasks Figure 2 summarizes the results obtained for all three tasks in terms of (a) absolute F-scores and (b) relative improvement (best preprocessing over Basic preprocessing). The IMDB dataset achieves the highest F-score overall, most likely because it consists of movie reviews which are much longer than the text from other genres. As expected, the binary classification task of sentiment analysis and sarcasm detection achieve comparable results, while the multiclass emotion classification typically has much lower F-scores. The most interesting observation, however, is noticed in Fig. 2(b) where the emotion datasets show the highest relative improvement, indicating that multiclass classification tasks may benefit the most from applying preprocessing at word embedding stage (Pre). 6 Conclusions We systematically examined the role of preprocessing training corpora used to induce word representations for affect analysis. While all preprocessing techniques improved performance to a certain ex21https://github.com/bfelbo/DeepMoji 22https://www.dropbox.com/s/wr5ovupf7yl282x/ewe uni.txt Figure 2: Absolute F-scores vs. relative improvement tent, our analysis suggests that the most noticeable increase is obtained through negation processing (neg). The overall best performance is achieved by applying all the preprocessing techniques, except stopwords removal (All-stop). Interestingly, incorporating preprocessing into word representations appears to be far more beneficial than applying it in a downstream task to classification datasets. Moreover, while all the three affective tasks (sentiment analysis, sarcasm detection and emotion classification) benefit from our proposed preprocessing framework, our analysis reveals that the multiclass emotion classification task benefits the most. Exploring the space of subsets of our preprocessing factors might yield more interesting combinations; we leave this for future work. Acknowledgements We thank the anonymous reviewers for their insightful comments. This work is funded by Natural Sciences and Engineering Research Council of Canada (NSERC) and the Big Data Research, Analytics, and Information Network (BRAIN) Alliance established by the Ontario Research Fund Research Excellence Program (ORF-RE). In particular, we thank Majid Taghdimi from Questrade to provide us with the computing resources and help in the parallelization algorithm. We would also like to thank Dr. Heidar Davoudi for the helpful discussions and insights in this project. 5808 References Ameeta Agrawal and Aijun An. 2012. Unsupervised emotion detection from text using semantic and syntactic relations. In Proceedings of the The 2012 IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent TechnologyVolume 01, pages 346–353. IEEE Computer Society. Ameeta Agrawal, Aijun An, and Manos Papagelis. 2018. Learning emotion-enriched word representations. In Proceedings of the 27th International Conference on Computational Linguistics, pages 950– 961. Giulio Angiani, Laura Ferrari, Tomaso Fontanini, Paolo Fornacciari, Eleonora Iotti, Federico Magliani, and Stefano Manicardi. 2016. A comparison between preprocessing techniques for sentiment analysis in twitter. In Proceedings of the 2nd International Workshop on Knowledge Discovery on the WEB, KDWeb. Nastaran Babanejad, Ameeta Agrawal, Heidar Davoudi, Aijun An, and Manos Papagelis. 2019. Leveraging emotion features in news recommendations. 
In Proceedings of the 7’th International Workshop on News Recommendation and Analytics (INRA’19) in conjunction with RecSys’19, Copenhagen, Denmark, September 16 - 20, 2019. Farah Benamara, Baptiste Chardon, Yannick Mathieu, Vladimir Popescu, and Nicholas Asher. 2012. How do negation and modality impact on opinions? In Proceedings of the Workshop on ExtraPropositional Aspects of Meaning in Computational Linguistics, ExProM ’12, pages 10–18, Stroudsburg, PA, USA. Association for Computational Linguistics. Erik Boiy, Pieter Hens, Koen Deschacht, and MarieFrancine Moens. 2007. Automatic sentiment analysis in on-line text. In Proceedings of the 11th International Conference on Electronic Publishing ELPUB2007. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Jose Camacho-Collados and Mohammad Taher Pilehvar. 2018. On the role of text preprocessing in neural network architectures: An evaluation study on text categorization and sentiment analysis. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics. Ebba Cecilia and Alm Ovesdotter. 2008. Affect in text and speech. ProQuest, Citeseer. Billy Chiu, Gamal Crichton, Anna Korhonen, and Sampo Pyysalo. 2016. How to train good word embeddings for biomedical nlp. In Proceedings of the 15th workshop on biomedical natural language processing, pages 166–174. Taner Danisman and Adil Alpkocak. 2008. Feeler: Emotion classification of text using vector space model. In Proceedings of the AISB 2008 Symposium on Affective Language in Human and Machine, AISB 2008 Convention Communication, Interaction and Social Intelligence, volume 1, page 53. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Nemanja Djuric, Jing Zhou, Robin Morris, Mihajlo Grbovic, Vladan Radosavljevic, and Narayan Bhamidipati. 2015. Hate speech detection with comment embeddings. In Proceedings of the 24th International Conference on World Wide Web, WWW ’15 Companion, page 29–30, New York, NY, USA. Association for Computing Machinery. Manaal Faruqui, Jesse Dodge, Sujay K Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2014. Retrofitting word vectors to semantic lexicons. arXiv preprint arXiv:1411.4166. Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 International Conference on Empirical Methods in Natural Language Processing (EMNLP). Vachagan Gratian and Marina Haid. 2018. Braint at iest 2018: Fine-tuning multiclass perceptron for implicit emotion classification. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 243–247. Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint manytask model: Growing a neural network for multiple NLP tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1923–1933, Copenhagen, Denmark. Association for Computational Linguistics. Daniel Hershcovich, Assaf Toledo, Alon Halfon, and Noam Slonim. 2019. Syntactic interchangeability in word embedding models. 
arXiv preprint arXiv:1904.00669. Zhao Jianqiang and Gui Xiaolin. 2017. Comparison research on text pre-processing methods on twitter sentiment analysis. IEEE Access, 5:2870–2879. Mikhail Khodak, Nikunj Saunshi, and Kiran Vodrahalli. 2017. A large self-annotated corpus for sarcasm. arXiv preprint arXiv:1704.05579. 5809 Yanghoon Kim, Hwanhee Lee, and Kyomin Jung. 2018. AttnConvnet at SemEval-2018 task 1: Attentionbased convolutional neural networks for multi-label emotion classification. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 141–145, New Orleans, Louisiana. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 2, pages 302–308. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. Pierre Lison and Andrey Kutuzov. 2017. Redefining context windows for word embedding models: An experimental study. In Proceedings of the 21st Nordic Conference on Computational Linguistics (NoDaLiDa), pages 284–288, Gothenburg, Sweden. Association for Computational Linguistics. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics. Oren Melamud, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1030–1040, San Diego, California. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. George A. Miller. 1995. Wordnet: A lexical database for english. Association for Computing Machinery, Commun. ACM, 38(11):39–41. Rishabh Misra and Prahal Arora. 2019. Sarcasm detection using hybrid neural network. arXiv preprint arXiv:1908.07414. Saif M Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Transactions on Internet Technology (TOIT), 17(3):26. Hala Mulki, Chedi Bechikh Ali, Hatem Haddad, and Ismail Babao˘glu. 2018. Tw-star at semeval-2018 task 1: Preprocessing impact on multi-label emotion classification. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 167–171. Preslav Nakov, Alan Ritter, Sara Rosenthal, Veselin Stoyanov, and Fabrizio Sebastiani. 2016. SemEval2016 task 4: Sentiment analysis in Twitter. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval ’16, San Diego, California. Association for Computational Linguistics. Shereen Oraby, Vrindavan Harrison, Lena Reed, Ernesto Hernandez, Ellen Riloff, and Marilyn Walker. 2016. Creating and characterizing a diverse corpus of sarcasm in dialogue. 
In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 31–41, Los Angeles. Association for Computational Linguistics. Chaitali G. Patil and Sandip Patil. 2013. Use of porter stemming algorithm and svm for emotion extraction from news headlines. In International Journal of Electronics, Communication and Soft Computing Science and Engineering. Samuel Pecar, Michal Farkas, Marian Simko, Peter Lacko, and Maria Bielikova. 2018. Nl-fiit at iest2018: Emotion recognition utilizing neural networks and multi-level preprocessing. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 217–223. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. S Lovelyn Rose, R Venkatesan, Girish Pasupathy, and P Swaradh. 2018. A lexicon-based term weighting scheme for emotion identification of tweets. International Journal of Data Analysis Techniques and Strategies, 10(4):369–380. Hassan Saif, Miriam Fernandez, Yulan He, and Harith Alani. 2014. On stopwords, filtering and data sparsity for sentiment analysis of twitter. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 810–817, Reykjavik, Iceland. European Language Resources Association (ELRA). Hendrik Schuff, Jeremy Barnes, Julian Mohme, Sebastian Pad´o, and Roman Klinger. 2017. Annotation, modelling and analysis of fine-grained emotions on a stance and sentiment detection corpus. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 13–23. Dibyendu Seal, Uttam K Roy, and Rohini Basak. 2020. Sentence-level emotion detection from text based 5810 on semantic rules. In Information and Communication Technology for Sustainable Development, pages 423–430. Springer. Florian Strohm. 2017. The impact of intensifiers, diminishers and negations on emotion expressions. B.S. thesis, University of Stuttgart. Symeon Symeonidis, Dimitrios Effrosynidis, and Avi Arampatzis. 2018. A comparative evaluation of preprocessing techniques and their interactions for twitter sentiment analysis. Expert Systems with Applications, 110:298–310. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentimentspecific word embedding for twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1555–1565, Baltimore, Maryland. Association for Computational Linguistics. Ivan Vuli´c, Simon Baker, Edoardo Maria Ponti, Ulla Petti, Ira Leviant, Kelly Wing, Olga Majewska, Eden Bar, Matt Malone, Thierry Poibeau, Roi Reichart, and Anna Korhonen. 2020. Multi-simlex: A largescale evaluation of multilingual and cross-lingual lexical semantic similarity. Harald G Wallbott and Klaus R Scherer. 1986. How universal and specific is emotional experience? evidence from 27 countries on five continents. Information (International Social Science Council), 25(4):763–795. Peng Xu, Andrea Madotto, Chien-Sheng Wu, Ji Ho Park, and Pascale Fung. 2018. Emo2Vec: Learning generalized emotion representation by multitask training. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 292–298, Brussels, Belgium. 
Association for Computational Linguistics.
2020
514
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5811–5820 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5811 Diverse and Informative Dialogue Generation with Context-Specific Commonsense Knowledge Awareness Sixing Wu1 , Ying Li2∗, Dawei Zhang1 , Yang Zhou3 and Zhonghai Wu2 1School of Electronics Engineering and Computer Science, Peking University, Beijing, 100871, China 2National Research Center of Software Engineering, Peking University, Beijing, 100871, China 3Auburn University, Auburn, Alabama, 36849, USA {wusixing, li.ying, daweizhang}@pku.edu.cn [email protected], [email protected] Abstract Generative dialogue systems tend to produce generic responses, which often leads to boring conversations. For alleviating this issue, Recent studies proposed to retrieve and introduce knowledge facts from knowledge graphs. While this paradigm works to a certain extent, it usually retrieves knowledge facts only based on the entity word itself, without considering the specific dialogue context. Thus, the introduction of the context-irrelevant knowledge facts can impact the quality of generations. To this end, this paper proposes a novel commonsense knowledge-aware dialogue generation model, ConKADI. We design a Felicitous Fact mechanism to help the model focus on the knowledge facts that are highly relevant to the context; furthermore, two techniques, Context-Knowledge Fusion and Flexible Mode Fusion are proposed to facilitate the integration of the knowledge in the ConKADI. We collect and build a large-scale Chinese dataset aligned with the commonsense knowledge for dialogue generation. Extensive evaluations over both an open-released English dataset and our Chinese dataset demonstrate that our approach ConKADI outperforms the state-of-the-art approach CCM, in most experiments. 1 Introduction Nowadays, open-domain dialogue response generation systems have shown impressive potential, to endow a machine with the ability to converse with a human, using natural language (Chen et al., 2017). Although such models have achieved promising performance, they still suffer from generating generic and boring responses, such as ”I don’t know.” Such low-quality responses always reduce the attractiveness of generative dialogue systems to end-users. Researchers have tried to ∗Corresponding author: Ying Li, [email protected] tackle it from multiple aspects; for example, using the enhanced objective function (Li et al., 2016a); introducing additional contents (Xu et al., 2019). However, these methods haven’t solved the issue thoroughly. Different from a human being, who is capable of associating the dialogue with the background knowledge in his/her mind, a machine can merely capture limited information from the surface text of the query message (Ghazvininejad et al., 2018). Consequently, it is difficult for a machine to understand the query fully, and then to generate diverse and informative responses (Zhou et al., 2018). To bridge the gap of the knowledge between the human and the machine, researchers have begun to introduce large-scale knowledge graphs for enhancing the dialogue generation (Zhu et al., 2017; Zhou et al., 2018; Liu et al., 2018), and they have obtained lots of impressive results. Generally, the retrieval of knowledge facts is based on the entity name; in detail, the first step is to recognize entity words in the given query message, and then facts that contain the mentioned entities can be retrieved as candidates1. 
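As a rough illustration of this retrieval step, the sketch below matches query tokens against graph vertices and collects (head, relation, tail) triples. The toy in-memory graph and helper names are ours; a real system would query a knowledge graph such as ConceptNet.

```python
# Illustrative entity-based fact retrieval: every neighbour of a matched vertex,
# together with its relation, becomes a candidate fact. Toy graph for demonstration.
TOY_GRAPH = {
    "apple": [("apple", "IsA", "fruit"), ("apple", "IsA", "brand"), ("apple", "RelatedTo", "iphone")],
}

def retrieve_facts(query_tokens):
    facts = []
    for token in query_tokens:
        facts.extend(TOY_GRAPH.get(token, []))  # no match -> contributes nothing
    return facts

print(retrieve_facts(["my", "apple", "broke"]))
# [('apple', 'IsA', 'fruit'), ('apple', 'IsA', 'brand'), ('apple', 'RelatedTo', 'iphone')]
```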
Subsequently, a knowledge-aware response can be generated based on the query message and previously retrieved facts. Although such a straightforward paradigm works to a certain extent, some challenges in knowledge-aware dialogue generation still keep unsolved. 1) An entity word usually can refer to different concepts, i.e., an entity has multiple meanings, but only one specific concept is involved in a particular context. Without considering this, some pre-fetched knowledge fact candidates can be irrelevant to the context. 2) Even if we only consider a particular entity meaning, the related knowledge facts may cover various target topics. However, some of those topics do not con1For example, for a mentioned entity “apple” in a query, the fact (apple, is a type of, fruit) or (fruit, related to, apple) can be retrieved. 5812 Figure 1: An illustrative example. #1 shows the response generated with a highly relevant fact, #2 shows the response generated with irrelevant facts. tribute to the dialogue generation. Figure 1 presents an illustrative example to demonstrate such two issues. Here, a subgraph is retrieved based on the entity word “Apple” in the query. In general, “Apple” can be interpreted as either a type of fruit or a brand name. In this context, it is evident that “Apple” refers to a brand name. However, some knowledge facts concerning a type of fruit are retrieved too. If a model makes an inappropriate choice of irrelevant facts, the generated response will make no sense to the query message. In our example, even for the entities in blue circle related to the brand name “Apple”, only some of them have a positive effect in the dialogue generation, e.g., “Jobs” should not make any contribution to the “#1”. 3) The integration of the knowledge and the dialogue generation in previous approaches is insufficient, including the way of integration, as well as the types of knowledge. To tackle such challenges, this paper proposes a Context Knowledge-Aware Diverse and Informative conversation generation model, ConKADI. First, we design a Felicitous Fact mechanism to help the model highlight the knowledge facts that are highly relevant to the context, that is, “Felicitous Facts”. Felicitous Fact mechanism generates a felicitous fact probability distribution over the retrieved facts. For improving the selection of felicitous facts, human-generated answers (i.e., the ground-truth responses) are used as the posterior context knowledge to supervise the training of the prior felicitous fact probability distribution. Next, Context-Knowledge Fusion is proposed to lift the role of knowledge facts in the dialogue generation, by fusing the context and the felicitous knowledge before the decoding. Last, ConKADI can generate three types of words owing to the Flexible Mode Fusion module, which aims at simultaneously fusing multiple types of knowledge. To summarize, Felicitous Fact mechanism can alleviate the first two issues, and the next two techniques solve the last issue. Consequently, our approach can improve the utilization rate of knowledge graphs, as well as can promote the diversity and informativeness of the generated responses. In the experiments, a large-scale Chinese Weibo dataset is collected and aligned with the commonsense knowledge for dialogue generation. We perform extensive evaluations on two large-scale datasets: an open-released English Reddit dataset and our proposed Chinese Weibo dataset. 
The experimental results demonstrate that our proposed ConKADI model significantly outperforms representative methods in knowledge utilization, diversity, and informativeness. Especially, ConKADI exceeds the latest knowledge-aware dialogue generation model, CCM (Zhou et al., 2018), in most experiments. 2 Related Work Seq2Seq (Sutskever et al., 2014; Vinyals and Le, 2015) has been widely used in the open-domain dialogue generation. However, models tend to generate generic responses (Serban et al., 2016). To tackle this issue, researchers have proposed new objectives (Li et al., 2016a), enhanced decoding algorithms (Li et al., 2016b), latent-variable based methods (Zhao et al., 2017, 2018; Gao et al., 2019). Introducing additional contents into the dialogue generation is also helpful. (Xu et al., 2019) uses meta-words; (Zhu et al., 2019) uses the retrieved existing dialogues. However, the leading cause of generating generic responses is that the model can not obtain enough background knowledge from the query message (Ghazvininejad et al., 2018; Liu et al., 2019). Recently, to alleviate the lack of background knowledge, researchers have begun to introduce the knowledge into the generation. The knowledge can be the unstructured knowledge texts (Ghazvininejad et al., 2018), the structured knowledge graphs (Zhou et al., 2018), or the hybrid of them (Liu et al., 2019). The structured knowledge has the best quality, because it is generally extracted and summarized by the human. The structured knowledge graph can be either domain-specific (Zhu et al., 2017; Liu et al., 2018) or open-domain (Young et al., 2018; Zhou et al., 2018). ConceptNet (Speer 5813 et al., 2017) is a multilingual open-domain commonsense knowledge graph, which is designed to represent the general knowledge and to improve understanding of the meanings behind the words people use. Two previous studies (Young et al., 2018; Zhou et al., 2018) have proved the feasibility of introducing commonsense knowledge into dialogue systems. The first work (Young et al., 2018) is designed for retrieval-based systems; therefore, only the current state-of-the-art CCM (Zhou et al., 2018) is our direct competitor. In comparison with CCM, 1) ConKADI is aware of the context when using the knowledge. 2) ConKADI uses human’s responses as posterior knowledge in training. In addition, our Felicitous Fact mechanism is different from the word/knowledge selection mechanisms previously proposed in related tasks; for example, selecting a cue word (Mou et al., 2016; Yao et al., 2017) or selecting a knowledge (Liu et al., 2019). First, ConKADI can access more contextual information because our model is fully end-to-end, while previous works use independent and external modules. Second, our Felicitous Fact outputs a probabilistic distribution instead of a hard singleton value, as did the previous works. 3 Approach 3.1 Task Formulation and Model Overview Formally, given a training data D of triplets, where each triplet includes a query message X = (x1, . . . , xn), a response Y = (y1, . . . , ym), and a set of commonsense knowledge facts F = {f1, . . . , fl}. The training goal of knowledgeaware dialogue generation is to maximize the probability P (X,Y,F)∈D 1 |D|p(Y |X, F); the inference goal is to find Y ∗= arg maxY p(Y |X, F). Knowledge facts F are retrieved from the knowledge graph G ; each fact is organized as a triplet (h, r, t). The overview of ConKADI has been shown in Figure 2. 
The knowledge fact set F is retrieved by the Knowledge Retriever given the query message X. The Context Encoder summarizes an utterance into contextual representations. The Felicitous Fact Recognizer calculates the felicitous fact probability distribution z over F, which is used to initialize the Decoder and to guide the generation. The Triple Knowledge Decoder can generate three types of words, namely vocabulary words, entity words, and copied words, with the Flexible Mode Fusion.

[Figure 2: An overview of the proposed approach ConKADI.]

3.2 Felicitous Fact Mechanism

Knowledge Retriever: Given a query message X, if a word $x_i \in X$ is recognized as an entity word and can be matched to a vertex $e_{src}$ in the knowledge graph $\mathcal{G}$, then each neighbour $e_{tgt} \in \mathrm{Neighbour}(e_{src})$ together with the corresponding directional relation $r$ is retrieved as a candidate fact $f$; $e_{src}$/$e_{tgt}$ is called the source/target entity. If a word cannot be matched to any vertex, a special fact $f_{NAF}$ is used instead.

Context Encoder: The Context Encoder is a bidirectional GRU network (Cho et al., 2014), which reads X or Y and outputs a contextual state sequence. For simplicity, we take X as an example. At time step t, the Encoder outputs a forward state and a backward state, and the concatenation of these two states, $h^x_t = [h^{fw}_t; h^{bw}_t] \in \mathbb{R}^{2d_h \times 1}$, is regarded as the contextual state:
$$h^{fw}_t = \mathrm{GRU}^{fw}(h^{fw}_{t-1}, \mathbf{x}_t, \mathbf{e}_{x_t}), \quad h^{bw}_t = \mathrm{GRU}^{bw}(h^{bw}_{t-1}, \mathbf{x}_{n-t+1}, \mathbf{e}_{x_{n-t+1}}) \quad (1)$$
where $\mathbf{x}_t$ is the word embedding of $x_t$. To enrich the semantic information, the matched entity embedding $\mathbf{e}_{x_t}$ of $x_t$ is also involved. Finally, the contextual state sequence of X/Y is denoted as $H^{x/y} = (h^{x/y}_1, \dots, h^{x/y}_{n/m})$. Specifically, $H^x$ is the prior context, and $H^y$ is the posterior context, which is only available in the training stage.

Felicitous Fact Recognizer: Recall from the example illustrated in Figure 1 that some of the preliminarily retrieved knowledge facts may be inappropriate in the dialogue context. The Felicitous Fact Recognizer is designed to detect the facts that highly coincide with the dialogue context, i.e., the Felicitous Facts. It reads the contextual information and outputs a probability distribution $z \in \mathbb{R}^{l \times 1}$ over F; the i-th dimension $z[i]$ indicates the weight of $f_i$. In the training stage, the high-quality human-generated response Y serves as posterior knowledge; hence, the posterior $z^{post}$ is adopted in training, and the prior $z^{prior}$ is adopted in inference:
$$z^{post} = \eta\big(\phi(\mathbf{F} \cdot W_{ft}) \cdot \phi([h^{x\top}_n; h^{y\top}_m] \cdot W_{post})\big)^\top, \quad z^{prior} = \eta\big(\phi(\mathbf{F} \cdot W_{ft}) \cdot \phi(h^{x\top}_n \cdot W_{prior})\big)^\top \quad (2)$$
where $\mathbf{F} \in \mathbb{R}^{l \times (d_e + d_r + d_e)}$ is the embedding matrix of the retrieved facts F, $W_{ft}$, $W_{post}$, and $W_{prior}$ are trainable parameters, $\eta$ is the softmax activation, and $\phi$ is the tanh activation. The Kullback-Leibler Divergence (Kullback and Leibler, 1951) (KLD) is used to force the two distributions to become as close as possible:
$$\mathcal{L}_k = \mathrm{KLD}(z^{post}, z^{prior}) \quad (3)$$
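The prior/posterior scoring and the KL term of Eqs. (2)-(3) can be sketched as follows. This is a hedged illustration rather than the released code; the tensor names, dimensions, and the small epsilon added for numerical stability are assumptions.

```python
# Hedged sketch of the Felicitous Fact Recognizer (Eqs. 2-3), not the official
# implementation; shapes and weight names are illustrative.

import torch
import torch.nn.functional as F

l, d_f, d_h = 8, 300, 512            # facts, fact-embedding dim, hidden dim
fact_emb = torch.randn(l, d_f)       # embeddings of the l retrieved facts
h_x_n = torch.randn(1, 2 * d_h)      # final prior context state
h_y_m = torch.randn(1, 2 * d_h)      # final posterior context state (train only)

W_ft = torch.randn(d_f, d_h)
W_prior = torch.randn(2 * d_h, d_h)
W_post = torch.randn(4 * d_h, d_h)

def fact_distribution(context, W_ctx):
    # score every fact against the (projected) context, then softmax over facts
    scores = torch.tanh(fact_emb @ W_ft) @ torch.tanh(context @ W_ctx).t()
    return F.softmax(scores.squeeze(-1), dim=0)

z_prior = fact_distribution(h_x_n, W_prior)
z_post = fact_distribution(torch.cat([h_x_n, h_y_m], dim=-1), W_post)

# L_k = KL(z_post || z_prior): pushes the prior towards the posterior
L_k = torch.sum(z_post * (torch.log(z_post + 1e-10) - torch.log(z_prior + 1e-10)))
print(z_prior.shape, float(L_k))
```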
Context-Knowledge Fusion: To enhance the Decoder's understanding of the background knowledge, the Decoder is initialized based on the fused knowledge $f_z^\top = z^\top \cdot \mathbf{F}$ and the query context:
$$h^{y\top}_0 = \tanh([h^{x\top}_n; f_z^\top] \cdot W_{init}) \quad (4)$$
where $W_{init}$ is a trainable parameter. Following previous work (Zhao et al., 2017), we adopt a Bag-of-Words loss to ensure the accuracy of the inputs of the Context-Knowledge Fusion, namely $h^x_n$ and $f_z$. Meanwhile, we construct a 0-1 indicator vector $I_f \in \mathbb{R}^{l \times 1}$ to supervise the training of $z^{post}$, where $I_f[i]$ is set to 1 if the target entity of the i-th fact $f_i$ appears in Y, and to 0 otherwise. Thus, the objective is to minimize $\mathcal{L}_f$, given by:
$$\mathcal{L}_f = -\frac{\sum_{y_b \in B} \log p_b(y_b \mid h^x_n, f_z)}{|B|} - \frac{I_f^\top \cdot \log(z^{post})}{|I_f|} \quad (5)$$
where B is the word bag of Y and $p_b$ is a 2-layer MLP activated by softmax, which outputs a probability distribution over the vocabulary V.

3.3 Triple Knowledge Decoder

The Decoder is another GRU network. At each time step, the Decoder can generate one of three types of words: vocabulary words, knowledgeable entity words, and copied words. ConKADI first updates the internal state:
$$h^y_t = g(h^y_{t-1}, u_{t-1}, c_{t-1}) \quad (6)$$
where $u_{t-1}^\top = [\mathbf{y}^\top_{t-1}; \mathbf{e}^\top_{y_{t-1}}; h^{x\top}_{y_{t-1}}]$, and $\mathbf{y}_{t-1}$, $\mathbf{e}_{y_{t-1}}$, and $h^x_{y_{t-1}}$ are the word embedding, the entity embedding, and the pointed-then-copied source state of the last predicted token $y_{t-1}$, respectively; $c_{t-1}$ is the attention context (we omit the description of the attention mechanism; see Luong et al. (2015) for details).

Vocabulary Words: The probability distribution $p_{w,t} \in \mathbb{R}^{|V| \times 1}$ over V is given by:
$$p_{w,t}^\top = \eta\big(\mathrm{elu}([h^{y\top}_t; u^\top_{t-1}; c^\top_t] \cdot W_{v1}) \cdot W_{v2}\big) \quad (7)$$
where $W_{v1/2}$ are trainable parameters and the non-linear activation elu was proposed by Clevert et al. (2016).

Knowledgeable Entity Words: An entity word can be generated by extracting the target entity of the best-matched fact f at each time step. The corresponding probability distribution $p_{k,t} \in \mathbb{R}^{l \times 1}$ over F is calculated as:
$$z_{d,t} = \eta\big(\phi(\mathbf{F} \cdot W_{fd}) \cdot \phi([h^{y\top}_t; u^\top_{t-1}] \cdot W_d)^\top\big), \quad \gamma_t = \mathrm{sigmoid}([h^{y\top}_t; u^\top_t; c^\top_t] \cdot W_{gate}) \in \mathbb{R}^1, \quad p_{k,t} = \gamma_t \times z + (1.0 - \gamma_t) \times z_{d,t} \quad (8)$$
where the previously computed z serves as a static global distribution (denoted as GlFact), $z_{d,t}$ is the dynamic distribution, and $\gamma_t$ is a gate that controls the contribution of each distribution.

Copied Words: The Decoder can further point to a word x in X and then copy it. The corresponding probability distribution $p_{c,t} \in \mathbb{R}^{n \times 1}$ over the query message X is calculated as:
$$p_{c,t} = \eta\big(\phi(H^x \cdot W_{cs}) \cdot \phi(u^{c\top}_t \cdot W_{ct})^\top\big), \quad u^{c\top}_t = [h^{y\top}_t; u^\top_{t-1}; c^\top_t] \quad (9)$$

Flexible Mode Fusion: The three distributions above can be fused by $\mathrm{MF}(h^y_t, u_{t-1}, c_t)$, a 2-layer MLP activated by softmax. MF outputs a probability distribution $(\gamma_{w,t}, \gamma_{k,t}, \gamma_{c,t})$ over the three modes at each time step:
$$p_{out,t} = \gamma_{w,t} \times p_{w,t} + \gamma_{k,t} \times p_{k,t} + \gamma_{c,t} \times p_{c,t} \quad (10)$$
The proposed MF can be regarded as a multi-class classifier; its advantage is flexibility: we can integrate additional modes or remove existing ones simply by changing the number of classes. For a more reasonable fusion, the cross-entropy between the ground-truth mode and the distribution predicted by MF is used to supervise the training; the corresponding cross-entropy loss is denoted as $\mathcal{L}_m$. Next, we optimize the fused output distribution $p_{out}(Y \mid X, F)$ by minimizing $\mathcal{L}_n$, which is given by:
$$\mathcal{L}_n = -\sum_t \lambda_t \log p_{out,t}(y_t \mid y_{t-1:1}, X, F) + \frac{\mathcal{L}_m}{2} \quad (11)$$
where $\lambda_t$ is a normalization term that penalizes out-of-vocabulary words: $\lambda_t = \frac{1}{\#(unk \in Y)}$ if $y_t$ is an unk, and $\lambda_t = 1$ otherwise, with $\#(\cdot)$ denoting the count of its argument.

Training Objective: Finally, ConKADI is trained by minimizing the following objective:
$$\mathcal{L} = \mathcal{L}_n + \mathcal{L}_k + \mathcal{L}_f \quad (12)$$

4 Experiments

4.1 Dataset

To verify the generalization across languages, we evaluate models not only on the public English Reddit dataset (Zhou et al., 2018) but also on a Chinese Weibo dataset that we collect and construct. Both datasets are aligned with the commonsense knowledge graph ConceptNet (conceptnet.io); the statistics are reported in Table 1.
| | Reddit | Weibo |
| #Train | 1,352,961 | 1,019,908 |
| #Dev/#Test | 40,000 | 56,661 |
| #Vocab | 30,000 | 50,000 |
| Batch Size | 100 | 50 |
| #Entity/#Relation | 21,471/44 | 27,189/26 |
| #Fact | 149,803 | 696,466 |

Table 1: The statistics of the two datasets.

The English Reddit: We applied some filtering to the raw data: utterances that are too short (< 4 words) or too long (> 30 words) were removed, and each message can be associated with at most 300 related fact triplets.

The Chinese Weibo: We first collected three open-sourced Weibo (weibo.com) datasets, which originally contained 4.44M (Shang et al., 2015), 1.96M (Ke et al., 2018), and 10.48M (Li and Yan, 2018) dialogue pairs, respectively. Jieba (https://pypi.python.org/pypi/jieba/) was used for word segmentation, and utterances that are too short/long were removed as well. Next, we crawled 4.48M entities and 13.98M facts from ConceptNet; stop entities and low-frequency entities were excluded. A dialogue pair was kept if one entity in the message and another entity in the response can be connected by a 1-hop edge in the knowledge graph. In comparison with the English Reddit, our dataset has more facts, but the relation types are rather limited; hence, we limit each message to at most 150 associated fact triplets. For both datasets, the embeddings of entities and relations are learned with TransE (Bordes et al., 2013) and then kept fixed during training. Our experimental resources are available at https://github.com/pku-orangecat/ACL2020-ConKADI.

4.2 Settings

Baselines: We compare against the widely used S2S (Sutskever et al., 2014) and its attentive version ATS2S (Luong et al., 2015). We further add bidirectional MMI (Li et al., 2016a) or diverse decoding (Li et al., 2016b) to improve the diversity of ATS2S, denoted as ATS2S_MMI and ATS2S_DD (the best k was searched from [0.1, 3.0]). The Copy mechanism (Gu et al., 2016; Vinyals et al., 2015) allows the Decoder to point to and then copy a source word. GenDS is a knowledge-aware model that can generate responses utilizing entity words (Zhu et al., 2017). CCM is the current state-of-the-art approach for response generation with commonsense knowledge (Zhou et al., 2018).
| Metric | Ematch | Euse | Erecall | Embavg | Embex | BLEU-2 | BLEU-3 | Distinct-1 | Distinct-2 | Entropy | Ra | Rg |
| Chinese Weibo | | | | | | | | | | | | |
| S2S | 0.33 | 0.58 | 13% | 0.770 | 0.500 | 2.24 | 0.80 | 0.21 | 1.04 | 6.09 | 0.78 | 0.75 |
| ATS2S | 0.33 | 0.59 | 12% | 0.767 | 0.513 | 1.93 | 0.69 | 0.27 | 1.23 | 5.99 | 0.77 | 0.75 |
| ATS2S_MMI | 0.40 | 0.74 | 15% | 0.773 | 0.528 | 4.01 | 1.61 | 0.75 | 3.91 | 7.49 | 1.24 | 1.21 |
| ATS2S_DD1.5 | 0.35 | 0.62 | 13% | 0.780 | 0.542 | 2.14 | 0.86 | 1.03 | 4.86 | 7.62 | 1.16 | 1.10 |
| Copy | 0.33 | 0.68 | 13% | 0.786 | 0.501 | 2.28 | 0.84 | 0.59 | 2.18 | 6.13 | 0.92 | 0.91 |
| GenDS | 0.75 | 0.84 | 26% | 0.789 | 0.524 | 2.09 | 0.73 | 0.30 | 1.66 | 5.89 | 0.94 | 0.91 |
| CCM | 0.99 | 1.09 | 28% | 0.786 | 0.544 | 3.26 | 1.20 | 0.48 | 2.59 | 6.16 | 1.18 | 1.15 |
| AVG | 0.49 | 0.74 | 17% | 0.779 | 0.522 | 2.56 | 0.96 | 0.52 | 2.50 | 6.48 | 1.00 | 1.00 |
| ConKADI | 1.48 | 2.08 | 38% | 0.846 | 0.577 | 5.06 | 1.59 | 3.26 | 23.93 | 9.04 | 2.98 | 2.24 |
| ConKADI-cp | 1.60 | 1.89 | 38% | 0.833 | 0.567 | 5.00 | 1.52 | 2.34 | 18.29 | 8.75 | 2.55 | 2.08 |
| English Reddit | | | | | | | | | | | | |
| S2S | 0.41 | 0.52 | 4% | 0.868 | 0.837 | 4.81 | 1.89 | 0.38 | 1.77 | 7.59 | 0.82 | 0.78 |
| ATS2S | 0.44 | 0.59 | 5% | 0.863 | 0.831 | 4.50 | 1.81 | 0.82 | 3.44 | 7.62 | 0.92 | 0.91 |
| ATS2S_MMI | 0.45 | 0.65 | 6% | 0.858 | 0.825 | 4.95 | 2.13 | 0.75 | 3.22 | 7.62 | 0.95 | 0.94 |
| ATS2S_DD0.3 | 0.31 | 0.43 | 4% | 0.830 | 0.784 | 1.70 | 0.75 | 0.97 | 3.50 | 7.47 | 0.77 | 0.72 |
| Copy | 0.13 | 0.67 | 9% | 0.868 | 0.841 | 5.43 | 2.26 | 1.73 | 8.33 | 7.87 | 1.19 | 1.09 |
| GenDS | 1.13 | 1.26 | 13% | 0.876 | 0.851 | 4.68 | 1.79 | 0.74 | 3.97 | 7.73 | 1.14 | 1.10 |
| CCM | 1.08 | 1.33 | 11% | 0.871 | 0.841 | 5.18 | 2.01 | 1.05 | 5.29 | 7.73 | 1.21 | 1.18 |
| AVG | 0.55 | 0.77 | 7% | 0.860 | 0.829 | 4.40 | 1.79 | 0.94 | 4.32 | 7.69 | 1.00 | 1.00 |
| ConKADI | 1.24 | 1.98 | 14% | 0.867 | 0.852 | 3.53 | 1.27 | 2.77 | 18.78 | 8.50 | 1.76 | 1.46 |
| ConKADI-cp | 1.41 | 1.73 | 13% | 0.865 | 0.855 | 3.09 | 1.07 | 2.29 | 16.70 | 8.68 | 1.63 | 1.37 |

Table 2: Objective experimental results. The metric groups are Entity Score (Ematch, Euse, Erecall), Embedding (Embavg, Embex), Overlap (BLEU-2/3), Diversity (Distinct-1/2, scaled by 10^-2), Informativeness (Entropy), and R-Score (Ra, Rg). The ablation ConKADI-cp removes the ability to copy source words.

Implementation: We implemented all models except CCM, which was tested with its official code (CCM does not support beam search, so we use greedy search, except that ATS2S_MMI and ATS2S_DD use a beam size of 10). Most hyper-parameters are kept the same as in CCM, and hyper-parameters are kept as similar as possible across models. In detail, the word embedding dimension is 300, the Encoder is a 2-layer bidirectional GRU with 512 units, and the Decoder is a 2-layer GRU with 512 units. Adam is used to optimize the model with an initial learning rate lr = 0.0001; if the perplexity begins to increase, lr is halved, and if the perplexity increases for two consecutive epochs, training is stopped. Following CCM, the maximum number of epochs is 20.

Objective Metrics: We evaluate the generated responses from four aspects. Knowledge Utilization (A1): Ematch is the average number of matched target entities per generation (Zhou et al., 2018); Euse further counts the source entities; Erecall is the ratio of recalled entities. Embedding-based Relevance (A2a): Following Liu et al. (2016), we use Embavg, which considers the averaged word embedding, and Embex, which considers each dimension's extreme value. Overlapping-based Relevance (A2b): BLEU-2/3 (Tian et al., 2017; Wu et al., 2017). Diversity (A3): We report the ratio of distinct uni-/bi-grams, i.e., Distinct-1/2, over all generated texts (Li et al., 2016a; Wu et al., 2018). Informativeness (A4): We report the word-level Entropy (Mou et al., 2016).
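As a reference for how the diversity numbers are obtained, the sketch below computes Distinct-1/2 as the ratio of distinct uni-/bi-grams over all generated responses. It is an illustrative re-implementation, not the paper's evaluation script, and the toy responses are made up.

```python
# Illustrative sketch of the Distinct-1/2 diversity metrics.

from typing import List

def distinct_n(responses: List[List[str]], n: int) -> float:
    """Number of unique n-grams divided by the total number of n-grams."""
    total, unique = 0, set()
    for tokens in responses:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total > 0 else 0.0

generated = [
    "i am not sure".split(),
    "i am not sure what you mean".split(),
    "apple makes the iphone".split(),
]
print(distinct_n(generated, 1), distinct_n(generated, 2))
```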
Relative Score: To illustrate the comprehensive performance of the models, we first compute, metric by metric, the average score of the 7 baselines (AVG); we then report the arithmetic mean score
$$R_a = \frac{1}{5} \sum_{A_i} \Big( \frac{1}{|A_i|} \sum_{m_j \in A_i} \frac{m_j}{m_{j,AVG}} \Big) \quad (13)$$
and the geometric mean score
$$R_g = \bigg( \prod_{A_i} \Big( \prod_{m_j \in A_i} \frac{m_j}{m_{j,AVG}} \Big)^{\frac{1}{|A_i|}} \bigg)^{\frac{1}{5}} \quad (14)$$

4.3 Experimental Results

The objective evaluation results on the two datasets are reported in Table 2. Judging by the Relative Score, the overall performance of ConKADI surpasses the baseline models. More specifically, ConKADI outperforms the baselines on all metrics except BLEU-3 on the Chinese Weibo, and on almost all metrics on the English Reddit. In comparison with the state-of-the-art method CCM, ConKADI increases the overall performance by 153%/95% (arithmetic/geometric mean) on the Chinese dataset and by 48%/25% on the English dataset.

Knowledge Utilization: By accessing the knowledge, the three knowledge-aware models, i.e., GenDS, CCM, and ConKADI, significantly outperform the other models. In comparison with GenDS and CCM, the advantages of ConKADI can be summarized as follows. 1) ConKADI achieves a higher utilization of the knowledge, as evidenced by Ematch. 2) By using the point-then-copy mechanism (ConKADI vs. ConKADI-cp), ConKADI further increases the total number of generated entities (Euse); after adding the point-then-copy mechanism, Ematch drops by 7.5%, but the overall Euse increases by 10%, which means that ConKADI can reasonably decide whether to use a knowledge fact or to copy a source word. 3) ConKADI is better at identifying the accurate knowledge; hence, its Erecall is much higher than that of GenDS and CCM. These results demonstrate that the proposed Felicitous Fact mechanism helps the model focus on the facts that are relevant to the dialogue context and increases both the utilization rate of the knowledge graph and the accuracy of knowledge selection.

Diversity and Informativeness: Generative models have long suffered from generating responses that lack diversity and informativeness. Although GenDS and CCM can utilize knowledge, they fail to solve this challenge and can even be beaten by other baselines. By contrast, our ConKADI significantly alleviates this issue. According to our ablation experiments, this notable improvement can be attributed to the proposed Context-Knowledge Fusion; more details are discussed in the ablation study.

Relevance: On the Chinese dataset, ConKADI has the best overall performance, but its performance is not ideal on the English dataset. We first attribute this to the inherent difference between the datasets: they are collected from different sources and have varying densities of entity relations (see Table 1). Second, we must emphasize that these metrics can only evaluate the relevance to the given reference. Dialogue is undoubtedly a 1-to-n rather than a 1-to-1 mapping; therefore, these results do not imply that the generation is inconsistent with the query. ConKADI is a very diverse model, and judging it against a single reference is unfair. A similar limitation has been identified and explained in recent work (Gao et al., 2019).
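For clarity, the relative scores of Eqs. (13)-(14) can be reproduced directly from Table 2. In the sketch below, the metrics are grouped into the five aspects A1, A2a, A2b, A3, and A4 (our reading of the metric definitions above), each value is divided by the 7-baseline average (AVG), and the per-aspect means are then averaged; the numbers are ConKADI's Chinese Weibo entries and the corresponding AVG row from Table 2, and the output matches the reported Ra = 2.98 and Rg = 2.24.

```python
# Hedged sketch of the relative scores Ra (Eq. 13) and Rg (Eq. 14).
# Values: ConKADI vs. the 7-baseline average (AVG) on the Chinese Weibo (Table 2);
# Erecall percentages are written as fractions.

from math import prod

# aspect -> {metric: (model_value, baseline_average)}
aspects = {
    "A1": {"Ematch": (1.48, 0.49), "Euse": (2.08, 0.74), "Erecall": (0.38, 0.17)},
    "A2a": {"Embavg": (0.846, 0.779), "Embex": (0.577, 0.522)},
    "A2b": {"BLEU-2": (5.06, 2.56), "BLEU-3": (1.59, 0.96)},
    "A3": {"Distinct-1": (3.26, 0.52), "Distinct-2": (23.93, 2.50)},
    "A4": {"Entropy": (9.04, 6.48)},
}

ratios = {a: [v / avg for (v, avg) in ms.values()] for a, ms in aspects.items()}

R_a = sum(sum(r) / len(r) for r in ratios.values()) / len(ratios)                   # Eq. 13
R_g = prod(prod(r) ** (1 / len(r)) for r in ratios.values()) ** (1 / len(ratios))   # Eq. 14
print(round(R_a, 2), round(R_g, 2))   # approximately 2.98 and 2.24
```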
4.4 Human Annotation

| ConKADI vs. | Appropriateness (Win / Tie / Lose) | Informativeness (Win / Tie / Lose) |
| ATS2S | 71.3% / 11.0% / 17.7% | 87.3% / 6.9% / 5.8% |
| ATS2S_MMI | 59.3% / 9.2% / 31.5% | 82.5% / 7.3% / 10.2% |
| Copy | 71.7% / 8.8% / 19.5% | 89.7% / 3.8% / 6.5% |
| GenDS | 87.2% / 7.3% / 5.5% | 93.8% / 2.3% / 3.5% |
| CCM | 83.8% / 6.9% / 9.3% | 93.0% / 3.5% / 3.5% |

Table 3: Human annotation results on the Chinese Weibo. ConKADI significantly (sign test, p-value < 0.005, ties removed) outperforms the other baselines in terms of both appropriateness and informativeness.

Following Liu et al. (2019), we randomly sample 200 query messages from the test set and conduct a pair-wise comparison. Among the variations of S2S, we keep the two most representative models, ATS2S and ATS2S_MMI; thus, we have 1,000 pairs in total. For each pair, we invite three well-educated volunteers to judge which response is better in terms of the following two metrics: 1) Appropriateness, which mainly considers fluency and logical relevance; 2) Informativeness, which considers whether the model provides new information/knowledge. Ties are allowed, but volunteers are asked to avoid them where possible. The model names are masked, and the A-B order is random. For appropriateness, the 2/3 agreement (i.e., the percentage of cases in which at least two volunteers give the same label) is 95%, and the 3/3 agreement is 67.1%; for informativeness, the 2/3 agreement is 97%, and the 3/3 agreement is 79.1%.

The results are reported in Table 3. ATS2S_MMI is the strongest baseline owing to beam search and MMI re-ranking, especially in terms of appropriateness: although its generation is more generic, it is friendly to human readers and hence tends to receive higher scores. GenDS and CCM are far behind our model; we find that their generation is often not fluent, even though many entities are generated. Comparing the two metrics, ConKADI has a more notable advantage in terms of informativeness.

4.5 Ablation Study

We focus on the ablation of the Felicitous Fact mechanism. There are three factors: GlFact (using the distribution z to guide the entity word generation), CKF (Context-Knowledge Fusion), and CKF's loss $\mathcal{L}_f$. Copy fully removes the Felicitous Fact mechanism (i.e., all three factors); Base further removes the ability to copy source words.
| # | Settings | Euse | Distinct-2 | Entropy | Rg |
| #1 | Copy+GlFact+CKF+Lf | 2.08 | 23.93 | 9.04 | 2.24 |
| #2 | Base+GlFact+CKF+Lf | 1.89 | 18.29 | 8.75 | 2.02 |
| #3 | Copy+GlFact+CKF | 1.79 | 18.18 | 8.73 | 2.08 |
| #4 | Base+GlFact+CKF | 1.92 | 17.38 | 8.87 | 2.01 |
| #5 | Base+CKF | 1.87 | 15.72 | 8.66 | 1.96 |
| #6 | Base+GlFact | 1.05 | 2.90 | 6.31 | 1.10 |
| #7 | Base | 1.06 | 2.50 | 6.46 | 1.10 |

Table 5: Ablation study on the Chinese Weibo.

The results are reported in Table 5. 1) The performance drops significantly when the context-knowledge fused result is not used to initialize the Decoder (#5 vs. #7), indicating that CKF is very important for the Decoder. 2) If GlFact is adopted alone, it can in turn hurt performance (#6 vs. #7). 3) $\mathcal{L}_f$ is essential for Copy in comparison with Base.

Analysis of KL Divergence: The training stage introduces posterior knowledge, which is absent during inference; therefore, reducing the difference between the two distributions is necessary. We examine the curve of the KLD between $z^{prior}$ and $z^{post}$, i.e., $\mathcal{L}_k$; a lower $\mathcal{L}_k$ means the two distributions are closer. As shown in Figure 3: 1) the KLD is strongly related to the overall performance; 2) the importance of using the fused knowledge to initialize the Decoder (CKF) is confirmed once again (#5 vs. #6).

[Figure 3: The Kullback-Leibler divergence between z^prior and z^post on the Chinese Weibo against the training iteration number.]

4.6 Case Study

Three cases are sampled in Table 4.

| Query | #1: My cat likes bananas and bread. | #2: Yeah, but what website? | #3: 我会唱霉霉的歌。(I can sing the song of Taylor Swift.) |
| ATS2S | I'm a banana and I don't know what you're talking about. | I'm not sure. I'm just curious. | 我也是,我唱的是unk。(Me too. I'm singing unk.) |
| ATS2S_MMI | Do you have a cat? | It's a site site. | 你唱的是哪种歌? (What kind of song are you singing?) |
| Copy | I'm a cat. | I'm not sure what site you're talking about. | 我也是,我也是,我也是,我也喜欢。(Me too, me too, me too, I like it.) |
| GenDS | I'm a banana. | I'm not sure, but I'm not sure if it's a link to the original post. I'm not sure what the site is. | 你可以听我唱的唱。(You can listen to my singing singing.) |
| CCM | I'm a banana and I love my cat. | I'm not sure, I just got a link to the site. | 我也是,我也喜欢,听着歌着歌听着歌听着歌 (Me too. I like it, too. Listening to songs. Listening to songs. Listening to songs.) |
| ConKADI | And your cat is the best. | Looks like Youtube, the site is blocked. | 我听了,他的音乐好听。(I heard it. His music is good.) |

Table 4: Case study. #1 and #2 are sampled from the English Reddit; #3 is sampled from the Chinese Weibo.

In case 1, except for ATS2S_MMI and our ConKADI, the remaining models generate weird responses; ATS2S_MMI generates a fluent response, but it is not logically relevant to the query. In case 2, although GenDS and CCM generate entity words, they also produce redundant generic patterns, namely "I'm not sure ...", perhaps because their understanding of the background knowledge is still insufficient; our ConKADI generates a fluent and informative response. The last, challenging case is sampled from the Chinese dataset: "Taylor Swift" is a female singer, but she is an unknown word to the models. None of the generated responses is perfect, and only the generations of ATS2S_MMI and ConKADI are fluent. In comparison with ATS2S_MMI, the generation of ConKADI provides more information; the only small flaw is that ConKADI wrongly assumes "Taylor Swift" is a male singer.

5 Conclusion and Future Work

To bridge the knowledge gap between machines and human beings in dialogue generation, this paper proposes a novel knowledge-aware model, ConKADI. The proposed Felicitous Fact mechanism helps ConKADI focus on the facts that are highly relevant to the dialogue context by generating a felicitous fact probability distribution over the retrieved facts. Besides, the proposed Context-Knowledge Fusion and Flexible Mode Fusion facilitate the integration of knowledge in ConKADI. Extensive evaluations on an openly released English dataset and our constructed Chinese dataset demonstrate that ConKADI significantly outperforms the state-of-the-art model CCM and other baselines in most experiments. Although ConKADI achieves notable performance, there is still much room for improvement. 1) While ATS2S_MMI falls behind our ConKADI, we find that MMI can effectively enhance ATS2S; hence, in the future, we plan to verify the feasibility of re-ranking techniques for knowledge-aware models. 2) We will continue to promote the integration of high-quality knowledge, including more types of knowledge and more natural integration methods.
Acknowledgments This work is supported by the National Key R&D Program of China (Grant No. 2017YFB1002000), and PKU-Tencent Joint Innovation Research Program. Our deepest gratitude goes to the reviewers for their thoughtful suggestions, and we need to thank all our team members in FineLab for their help. References Antoine Bordes, Nicolas Usunier, Jason Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multi-Relational Data. In Advances in NIPS, volume 26, pages 2787–2795. Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A Survey on Dialogue Systems: Recent Advances and New Frontiers. ACM SIGKDD Explorations Newsletter, 19. Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder– Decoder for Statistical Machine Translation. empirical methods in natural language processing, pages 1724–1734. Djork-Arn´e Clevert, Thomas Unterthiner, and Sepp Hochreiter. 2016. Fast and accurate deep network learning by exponential linear units (elus). In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. Jun Gao, Wei Bi, Xiaojiang Liu, Junhui Li, and Shuming Shi. 2019. Generating multiple diverse responses for short-text conversation. In The ThirtyThird AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6383–6390. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5110–5117. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating Copying Mechanism in Sequence-to-Sequence Learning. Pei Ke, Jian Guan, Minlie Huang, and Xiaoyan. 2018. Generating Informative Responses with Controlled Sentence Function. Proceedings of ACL, pages 1499–1508. S. Kullback and R. A. Leibler. 1951. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A Diversity-Promoting Objective Function for Neural Conversation Models. north american chapter of the association for computational linguistics, pages 110–119. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016b. A simple, fast diverse decoding algorithm for neural generation. CoRR, abs/1611.08562. Juntao Li and Rui Yan. 2018. Overview of the NLPCC 2018 shared task: Multi-turn human-computer conversations. In Natural Language Processing and Chinese Computing - 7th CCF International Conference, NLPCC 2018, Hohhot, China, August 26-30, 2018, Proceedings, Part II, pages 446–451. Chia-Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation. 
Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018. Knowledge 5820 diffusion for neural dialogue generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018 , Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1489–1498. Zhibin Liu, Zheng-Yu Niu, Hua Wu, and Haifeng Wang. 2019. Knowledge aware conversation generation with explainable reasoning over augmented graphs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1782–1792, Hong Kong, China. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. empirical methods in natural language processing. Lili Mou, Yiping Song, Rui Yan, Ge Li, Lu Zhang, and Zhi Jin. 2016. Sequence to Backward and Forward Sequences: A Content-Introducing Approach to Generative Short-Text Conversation. pages 3349– 3358. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2016. A hierarchical latent variable encoder-decoder model for generating dialogues. CoRR, abs/1605.06069. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural Responding Machine for Short-Text Conversation. In Annual Meeting of the Association for Computational Linguistics, pages 1577–1586. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 4444–4451. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. neural information processing systems, pages 3104– 3112. Zhiliang Tian, Rui Yan, Lili Mou, Yiping Song, Yansong Feng, and Dongyan Zhao. 2017. How to Make Context More Useful? An Empirical Study on Context-Aware Neural Conversational Models. pages 231–236. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer Networks. Oriol Vinyals and Quoc Le. 2015. A neural conversational model. Computer Science. Sixing Wu, Dawei Zhang, Ying Li, Xing Xie, and Zhonghai Wu. 2018. HL-EncDec: A Hybrid-Level Encoder-Decoder for Neural Response Generation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 845–856. Yu Wu, Wei Wu, Dejian Yang, Can Xu, Zhoujun Li, and Ming Zhou. 2017. Neural Response Generation with Dynamic Vocabularies. Can Xu, Wei Wu, Chongyang Tao, Huang Hu, Matt Schuerman, and Ying Wang. 2019. Neural response generation with meta-words. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019 , Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5416–5426. Lili Yao, Yaoyuan Zhang, Yansong Feng, Dongyan Zhao, and Rui Yan. 2017. Towards implicit contentintroducing for generative short-text conversation systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 911, 2017, pages 2190–2199. Tom Young, Erik Cambria, Iti Chaturvedi, Hao Zhou, Subham Biswas, and Minlie Huang. 2018. Augmenting end-to-end dialogue systems with commonsense knowledge. 
In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence, (AAAI-18) , New Orleans, Louisiana, USA, February 2-7, 2018, pages 4970–4977. Tiancheng Zhao, Kyusong Lee, and Maxine Esk´enazi. 2018. Unsupervised discrete sentence representation learning for interpretable neural dialog generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018 , Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers. Tiancheng Zhao, Ran Zhao, and Maxine Esk´enazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 654–664. Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Commonsense Knowledge Aware Conversation Generation with Graph Attention. In Proceedings of the TwentySeventh International Joint Conference on Artificial Intelligence, pages 4623–4629, California. International Joint Conferences on Artificial Intelligence Organization. Qingfu Zhu, Lei Cui, Weinan Zhang, Furu Wei, and Ting Liu. 2019. Retrieval-enhanced adversarial training for neural response generation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019 , Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 3763–3773. Wenya Zhu, Kaixiang Mo, Yu Zhang, Zhangbin Zhu, Xuezheng Peng, and Qiang Yang. 2017. Flexible end-to-end dialogue system for knowledge grounded conversation. CoRR, abs/1709.04264.
Generate, Delete and Rewrite: A Three-Stage Framework for Improving Persona Consistency of Dialogue Generation

Haoyu Song1, Yan Wang2, Wei-Nan Zhang1*, Xiaojiang Liu2, Ting Liu1
1Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, Heilongjiang, China
2Tencent AI Lab, Shenzhen, China
{hysong,wnzhang,tliu}@ir.hit.edu.cn  {brandenwang,kieranliu}@tencent.com
*This work was done when the first author was an intern at Tencent AI Lab. Wei-Nan Zhang is the corresponding author.

Abstract

Maintaining a consistent personality in conversations is quite natural for human beings, but it is still a non-trivial task for machines. The persona-based dialogue generation task was thus introduced to tackle the personality-inconsistency problem by incorporating explicit persona text into dialogue generation models. Despite the success of existing persona-based models in generating human-like responses, their one-stage decoding framework can hardly avoid the generation of inconsistent persona words. In this work, we introduce a three-stage framework that employs a generate-delete-rewrite mechanism to delete inconsistent words from a generated response prototype and further rewrite it into a personality-consistent one. We carry out evaluations with both human judgments and automatic metrics. Experiments on the Persona-Chat dataset show that our approach achieves good performance.

1 Introduction

In an open-domain conversation scenario, two speakers conduct open-ended chit-chat from the initial greetings and usually come to focus on their characteristics, such as hobbies, pets, and occupations, in the course of the conversation. Humans can easily carry out conversations according to their personalities (Song et al., 2019a), but fulfilling this task is still a challenge for recent neural dialogue models (Welleck et al., 2019). One main issue is that these models are typically trained over millions of dialogues from different speakers, and neural dialogue models have a propensity to mimic the response with the maximum likelihood in the training corpus (Li et al., 2016b), which results in frequent inconsistencies in responses (Zhang et al., 2018).

[Figure 1: A common problem for persona-based dialogue models is that they can hardly avoid the generation of inconsistent persona words. Persona: "I'm a recording engineer; I live in California". Query: "Hi, Kevin here. I love Mexican food." Stage 1 (Generate): "Hi I am Tom. I am in Colorado. Where do you live?" (inconsistent personality). Stage 2 (Delete): "Hi I am Tom. <mask> <mask> <mask> <mask>. Where do you live?" Stage 3 (Rewrite): "Hi I am Tom. I'm an engineer in California. Where do you live?" Although the model generates a response that looks good, it is an inconsistent one; with further rewriting, the model can focus more on improving persona consistency.]

Another issue is the user-sparsity problem (Qian et al., 2017) in conventional dialogue corpora (Serban et al., 2015): some users have very little dialogue data, which makes it difficult for neural models to learn meaningful user representations (Li et al., 2016b). To alleviate the above issues, Zhang et al. (2018) introduced the Persona-Chat dataset to build more consistent dialogue models.
Different from conventional dialogue corpora, this dataset endows dialogue models with predefined personas, which is in the form of textually described profile (as shown in the first line of Figure 1). The persona-based dialogue models also adopt an encoder-decoder architecture and are enhanced with persona encoding components, such as memory network (Sukhbaatar et al., 2015) and latent variable (Kingma and Welling, 2013). These models turn out to produce more consistent responses than the persona-free ones (Zhang et al., 2018; Song et al., 2019a). Despite the successful application of the encoderdecoder framework in persona-based dialogue models, one concern is that they lack extra attention to the key persona information. The model will learn 5822 to minimize the overall loss of every decoded word, but this may lead to the neglect of the key personas: change of one persona-related word may not significantly affect the overall loss, but could turn a good response into a totally inconsistent one. As shown in Stage 1 of Figure 1, only one improper word “Colorado” leads the response to be inconsistent. A desirable solution should be able to capture personas and automatically learn to avoid and refine inconsistent words before the response. In this paper, we present a Generate-Delete-Rewrite framework, GDR, to mitigate the generation of inconsistent personas. We design three stages specifically for the goal of generating persona consistent dialogues: The first Generate stage adopts a transformer-based generator to produce a personabased response prototype. The second Delete stage employs a consistency matching model to identify inconsistencies and delete (by masking) the inconsistent words from the prototype. Finally, in the Rewrite stage, a rewriter polishes the masked prototype to be more persona consistent. To examine the effectiveness of our GDR model, we carried out experiments on the public available Persona-Chat dataset (Zhang et al., 2018). We summarize the main contributions as follows: • A three-stage end-to-end generative framework, GDR, was proposed for the generation of persona consistent dialogues. • A matching model was integrated into the generation framework to detect and delete inconsistent words in response prototype. • Experimental results show the proposed approach outperforms competitive baselines on both human and automatic metrics. 2 Related Work End-to-end dialogue generation approaches are a class of models for building open-domain dialogue systems, which have seen growing interests in recent years (Vinyals and Le, 2015; Shang et al., 2015; Serban et al., 2016; Li et al., 2016c; Zhao et al., 2017; Li et al., 2017). These dialogue models adopted recurrent units in a sequence to sequence (seq2seq) fashion (Sutskever et al., 2014). Since the transformer has been shown to be on par with or superior to the recurrent units (Vaswani et al., 2017), some dialogue models began to take advantage of this architecture for better dialogue modeling (Dinan et al., 2018; Su et al., 2019). Besides the advancements in dialogue models, the emergence of new dialogue corpus has also contributed to the research field. Zhang et al. (2018) introduced the Persona-Chat dataset, with explicit persona texts to each dialogue. Based on seq2seq model and memory network, they further proposed a model named Generative Profile Memory Network for this dataset. Following this line, Yavuz et al. (2019) designed the DeepCopy model, which leverages copy mechanism to incorporate persona texts. Song et al. 
(2019a) integrated persona texts into the Per-CVAE model for generating diverse responses. However, persona-based models still face the inconsistency issue (Welleck et al., 2019). To model persona consistency, Welleck et al. (2019) annotated the Persona-Chat dataset and introduced the Dialogue Natural Language Inference (DNLI) dataset, which converts the detection of dialogue consistency into a natural language inference task (Bowman et al., 2015).

Personalized dialogue generation is an active research field (Li et al., 2016b; Qian et al., 2017; Zhang et al., 2018; Zheng et al., 2019a,b; Zhang et al., 2019). In parallel with this work, Song et al. (2019b) leveraged adversarial training to enhance the quality of personalized responses, and Liu et al. (2020) incorporated mutual persona perception to build a more explainable (Liu et al., 2019) dialogue agent. Other relevant work lies in the area of multi-stage dialogue models (Lei et al., 2020). Some retrieval-guided dialogue models (Weston et al., 2018; Wu et al., 2019; Cai et al., 2019a,b; Su et al., 2020) also adopt a multi-stage framework, but the difference from our work is clear: we generate the prototype rather than retrieve one, since a high-quality retrieved response is not always available, especially under the persona-based setting.

3 Model

3.1 Overview

In this work, we consider learning a generative dialogue model to ground the response in an explicit persona. We focus on the persona consistency of single-turn responses and leave the modeling of multi-turn persona consistency as future work.

[Figure 2: The overall architecture of our three-stage GDR model, including a prototype generator (Generate stage), a consistency matching model (Delete stage), and a masked prototype rewriter (Rewrite stage). The italics denote the inputs of each stage, and the boldfaces denote the outputs. All attentions (attn) here refer to multi-head attention. For the sake of brevity, we omit some layers of the transformer in this figure.]

Formally, we use uppercase letters to represent sentences and lowercase letters to represent words. Let $Q = q_1, q_2, \dots, q_n$ denote the input query with n words, and let $P = \{P^{(1)}, P^{(2)}, \dots, P^{(k)}\}$ be the k different persona texts, where $P^{(i)} = p^{(i)}_1, p^{(i)}_2, \dots, p^{(i)}_{m_i}$ is the i-th persona text with $m_i$ words. Our goal is to learn a dialogue model M to generate a response $\hat{Y} = y_1, y_2, \dots, y_k$ that is consistent with the persona, based on both the query Q and the persona P; in abbreviation, $\hat{Y} = M(Q, P)$. More concretely, as shown in Figure 2, the proposed model M consists of three parts. 1) Prototype generator G: this component takes the persona texts and the query as input and generates a response prototype for further editing. It adopts an encoder-decoder architecture (Sutskever et al., 2014), with the transformer (Vaswani et al., 2017) applied in both the encoder and the decoder. 2) Consistency matching model D: this model is designed to detect and delete those words in the prototype that could lead to inconsistency. We train it in a natural language inference fashion on the DNLI dataset (Welleck et al., 2019). 3) Masked prototype rewriter R: the rewriter learns to rewrite the response prototype into a more consistent one. It is also a transformer decoder, which adopts a similar architecture to the decoder of G; the difference is that it takes the masked prototype, rather than the query, as input.

3.2 Generate: Prototype Generator

We apply the encoder-decoder structure to build our prototype generator G. For the encoder, we use the self-attentive encoder of the transformer. For the decoder, built upon the transformer decoder, we propose a tuple-interaction mechanism to model the relations among persona, query, and response.

Self-Attentive Encoder: As the persona P is composed of several sentences, we unfold all words in P into a sequence $p^{(1)}_1, p^{(1)}_2, \dots, p^{(i)}_{m_j}, \dots, p^{(k)}_{m_k}$. Then we use the self-attentive encoder (Vaswani et al., 2017) to compute the representations of the persona texts and the query separately. The multi-head attention is defined as $\mathrm{MultiHead}(Q, K, V)$, where Q, K, and V are the query, key, and value, respectively. The encoder is composed of a stack of $N_G$ identical layers. Take the first-layer encoding of P as an example:
$$V^{(1)}_p = \mathrm{MultiHead}(I(P), I(P), I(P)), \quad (1)$$
$$O^{(1)}_p = \mathrm{FFN}(V^{(1)}_p), \quad (2)$$
$$\mathrm{FFN}(x) = \max(0, xW_1 + b_1)W_2 + b_2, \quad (3)$$
where $V^{(1)}$ is the result of the first-layer multi-head self-attention and $I(\cdot)$ is the embedding function of the input; the input embedding for a word $w_i$ is the sum of its word embedding and position embedding. $O^{(1)}$ denotes the output of the first-layer feed-forward network. For the other layers:
$$V^{(n)}_p = \mathrm{MultiHead}(O^{(n-1)}_p, O^{(n-1)}_p, O^{(n-1)}_p), \quad (4)$$
$$O^{(n)}_p = \mathrm{FFN}(V^{(n)}_p), \quad (5)$$
where $n = 2, \dots, N_G$. We apply layer normalization to each sublayer via $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$. Q is encoded in the same way. After $N_G$ identical layers, we obtain the final representations $O^{(N_G)}_p$ and $O^{(N_G)}_q$, i.e., the encoded persona and the encoded query, respectively.
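The self-attentive encoder of Eqs. (1)-(5) corresponds to a standard transformer encoder block. The compact PyTorch sketch below is an illustration under assumed hyper-parameters (512 dimensions, 8 heads, inner size 2048), not the authors' code, and it reuses one layer's weights across the stack purely for brevity.

```python
# Compact sketch (not the released implementation) of one self-attentive
# encoder layer from Eqs. (1)-(5).

import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # V = MultiHead(x, x, x), followed by Add & LayerNorm
        v, _ = self.self_attn(x, x, x)
        x = self.norm1(x + v)
        # O = FFN(V) = max(0, xW1 + b1)W2 + b2, again with Add & LayerNorm
        return self.norm2(x + self.ffn(x))

persona_emb = torch.randn(1, 20, 512)   # unfolded persona words, already embedded
layer = EncoderLayer()
encoded = persona_emb
for _ in range(3):                      # N_G stacked layers (weights tied here for brevity)
    encoded = layer(encoded)
print(encoded.shape)                    # torch.Size([1, 20, 512])
```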
O(1) dec denotes the first layer output. Note that the Y here is masked to ensure depending only on the known words (Vaswani et al., 2017). Repeatedly, for other layers: V(n) y = MultiHead(O(n−1) dec ), O(n−1) dec ), O(n−1) dec ), (11) O(n) dec = FNN(mean(E(n), F(n), T(n))), (12) where n =2,...,NG. After NG layers, the decoder output O(NG) dec is projected from hidden size to vocabulary size, then followed up by a softmax function to get the words’ probabilities: Prob(1) = SoftMax(O(NG) dec W3 + b3), (13) where W3 is a hidden size×vocabulary size weight matrix and b3 is the bias term with vocabulary size dimension. And Prob(1) denotes the output distribution of the first stage. Now we can get the response prototype ˆY (1) from the Prob(1). … I love my cats Persona 1. I have nine dogs 2. I am a recording engineer Response Hi there, I am Peter and I love my cats self-attentive encoder self-attentive encoder h1 h2 hn … h1 h2 hm … cross attention weighted sum weighted sum p p · r p - r r multi-layer perceptron [ Entailment, Neutral, Contradiction ] Encoding Attention Matching … I love my ___ delete Figure 3: The architecture of our consistency matching model. “·” and “−” denote element-wise product and difference. The dotted line shows inference process, including consistency matching and word deleting. 3.3 Delete: Consistency Matching Model The goal of the consistency matching model D is to reveal word-level consistency between the persona texts and the response prototype, thus the inappropriate words can be deleted from the prototype. This model is trained to estimate the sentencelevel entailment category (Bowman et al., 2015) of a response for the given persona texts, which includes entailment, neutral and contradiction. The key is that if the category is not entailment, we can delete the most contributing words by replacing them with a special mask token, thus giving the model a chance to rephrase. The attention weights can measure each word’s contribution. The architecture of our consistency matching model is shown in Figure 3. From bottom to top are the self-attentive encoding layer, cross attention layer, and consistency matching layer. As described in section 3.2, the self-attentive encoder (SAE(·)) performs self-attention over the input to get sentence representations. Because the task of consistency matching is quite different from dialogue generation, we did not share the encoders between the generator G and matching model D: ¯A = SAED(P), (14) ¯B = SAED( ˆY (1)), (15) where ¯A is a hidden size × n matrix. ¯A = [¯a1, ¯a2, ..., ¯an] and ¯B = [¯b1,¯b2, ...,¯bm]. The n and m are the number of words in persona P and response prototype ˆY (1). Here we applied average pooling stragety (Liu et al., 2016; Chen et al., 2017) 5825 to get the summary representations: ¯a0 = n X i=1 ¯ai n , (16) and we can get the response attention weights and attentive response representations by: Wb = ¯a⊤ 0 ¯B, (17) eB = Wb¯B⊤, (18) where Wb is attention weights and eB is response representations. Similarly, we can get Wa and eA. Once eA and eB are generated, three matching methods (Chen et al., 2017) are applied to extract relations: concatenation, element-wise product, element-wise difference. The results of these matching methods are concatenated to feed into a multi-layer perceptron, which has three layers and tanh activation in between. The output is followed up by a SoftMax function to produce probabilities. 
In the inference process, as shown in Figure 3, the response attention weights Wb is leveraged to illustrate the inconsistent words, which will be deleted1. In practice, we use a simple heuristic rule for deleting words: as long as the category is not entailment, we will delete 10% of the words (at least one word)2, with the highest attention weight, in the prototype ˆY (1). In this way, we get the masked prototype ˆY (2). 3.4 Rewrite: Masked Prototype Rewriter The rewriter R takes the masked prototype and persona texts as input and outputs the final response. R is also a transformer decoder, which is similar to the decoder of G in section 3.2, but with a minor difference: the masked prototype is close to the target response, thus the direct attention between the prototype and target response is needless. The architecture of R can be seen in the third part of Figure 2, which can be formalized as: O(NG) mp = SAEG( ˆY (2)), (19) V(n) = MultiHead(O(n−1) rw ), O(n−1) rw ), O(n−1) rw ), (20) S(n) = MultiHead(V(n), O(NG) p , O(NG) p ), (21) K(n) = MultiHead(S(n), O(NG) mp , O(NG) mp ), (22) O(n) rw = FNN(mean(S(n), K(n))), (23) 1In this paper, “delete” a word means replacing this word with a special mask token. 2In our experiments, we found that deleting more words made it difficult for rewriter R to learn. Data Train Valid Test Persona Texts 74,522 5,843 4,483 Q-R Pairs 121,880 9,558 7,801 Table 1: Some statistics of Persona-Chat dataset. Valid denotes Validate and Q-R denotes Query-Response. Label Train Valid Test Entailment 100,000 5,500 5,400 Neutral 100,000 5,500 5,400 Contradiction 110,110 5,500 5,700 Table 2: Key statistics of DNLI dataset. where O(NG) mp is the encoded masked prototype and SAEG is the self-attentive encoder of G. O(NG) p is the encoded persona. After NR identical layers, the same generation process as in G is applied to the O(NR) rw , and we can get the final response ˆY (3). 3.5 Training The consistency matching model D is trained separately from the prototype generator G and rewriter R. As forementioned, the matching model D is trained in a natural language inference fashion on the DNLI dataset (Welleck et al., 2019), which has been well defined by the previous studies (Bowman et al., 2015; Chen et al., 2017; Gong et al., 2018). We minimize the CrossEntropy loss between the outputs of D and the ground truth labels. The G and R share the same training targets. We trained them by the standard maximum likelihood estimate. Notice that there are two different deleting strategies in training: (1) In the warm-up phase, because the G can hardly generate high-quality prototypes at this period, we randomly delete each word, with a 10% probability, from the prototype. (2) After that, the trained consistency matching model D is leveraged to delete words. 4 Experiments 4.1 Datasets We carried out the persona-based dialogue generation experiments on the public available PersonaChat dataset (Zhang et al., 2018). Furthermore, we trained the consistency matching model on the recently released Dialogue Natural Language Inference (DNLI) dataset (Welleck et al., 2019). We show the statistics of the Persona-Chat 5826 dataset in Table 1. The DNLI dataset (Welleck et al., 2019) is an enhancement to the Persona-Chat. It is composed of persona-utterance pairs from the Persona-Chat, and these pairs are further labeled as entailment, neutral, and contradiction. Some statistics of this dataset are given in Table 2. 
4.2 Compared Models To the best of our knowledge, this is an early work in modeling explicit persona consistency. To show the effectiveness of our models, we mainly compare it with the persona-based dialogue models: • S2SA S2SA is an RNN-based attentive seq2seq model (Bahdanau et al., 2014). It only takes the query as input. • Per-S2SA This is a seq2seq model that prepends all persona texts to the query as input (Zhang et al., 2018). • GPMN Generative Profile Memory Network is an RNN-based model that encodes persona texts as individual memory representations in a memory network (Zhang et al., 2018). • DeepCopy An RNN-based hierarchical pointer network, which leverages copy mechanism to integrate persona (Yavuz et al., 2019). • Per-CVAE This is a memory augmented CVAE model to exploit persona texts for diverse response generation (Song et al., 2019a). • Transformer Different from the RNN-based models, transformer is a self-attention based sequence transduction model (Vaswani et al., 2017). The persona texts are concatenated to the query to serve as its input. 4.3 Experimental Settings For all the RNN-based baseline models, they are implemented by two-layer LSTM networks with a hidden size 512. For the Transformer, the hidden size is also set to 512, and the layers of both encoder and decoder are 3. The number of heads in multi-head attention is 8, and the inner-layer size of the feedforward network is 2048. The word embeddings are randomly initialized, and the embedding dimension of all models is set to 512. Our model applies the same parameter settings as the transformer. The number of layers NG = ND = NR = 3. G and R share the word embeddings, but the matching model D uses independent embeddings. We use token-level batching with a size 4096. Adam is used for optimization, and the warm-up steps are set to 10,000. We implemented the model in OpenNMT-py (Klein et al., 2017). 4.4 Evaluation Metrics In the evaluation, there are two essential factors to consider: persona consistency and response quality. We apply both human evaluations and automatic metrics on these two aspects to compare different models. Human Evaluation We recruit five professional annotators from a third-party company. These annotators have high-level language skills but know nothing about the models. We sampled 200 persona-query-response tuples per model for evaluation. Duplicated queries (such as greetings which appear more than once) will not be sampled twice. First, we evaluate the persona consistency of a response. The annotators are asked to decide whether the response is consistent with the given persona. 0 indicates irrelevant or contradictory and 1 indicates consistent (Const.). Second, we evaluate the quality of a response on three conventional criteria: fluency (Fluc.), relevance (Relv.), and informativeness (Info.). Each aspect is rated on five-scale, where 1, 3, and 5 indicate unacceptable, moderate, and excellent performance, respectively. 2 and 4 are used for unsure. Automatic Metrics Dziri et al. (2019) has shown that natural language inference based entailment ratio can be used as an indicator of dialogue consistency. Here we trained two well-performed NLI models, DIIN (Gong et al., 2018) and BERT (Devlin et al., 2019), to automatically predict the category of persona-response pairs, and we calculated the ratio of entailment as an additional reference to the persona consistency. 
In our experiments, DIIN and BERT achieved 88.78% and 89.19% accuracy on the DNLI test set, respectively, compared with previous best results 88.20%. The trained models are then leveraged for calculating entailment ratios. Two model-based entailment ratios are abbreviated as Entdiin and Entbert. For dialogue quality, we follow Zhang et al. (2018) to use perplexity (PPL) to measure the fluency of responses. Lower perplexity means better fluency. Besides, we also use Dist-1 / Dist-2 (Li et al., 2016a) to examine the model’s ability to generate diverse responses, which is the ratio of distinct uni-grams / bi-grams. 5827 Model Const. Fluc. Relv. Info. PPL Dist-1. Dist-2. Entdiin Entbert S2SA 15.9% 3.17 2.84 2.63 34.8 1.92 4.86 9.80% 1.83% GPMN 34.8% 3.78 3.57 3.76† 34.1 1.89 7.53 14.5% 7.36% Per-S2S 35.3% 3.43 3.22 3.32 36.1 2.01 7.31 13.5% 6.15% DeepCopy 36.0% 3.26 3.08 2.87 41.2 2.35 8.93 16.7% 8.81% Transformer 38.8% 3.46 3.65† 3.54 27.9 3.12 15.8 14.2% 9.52% Per-CVAE 42.7% 3.53 2.97 3.66 -∗ 3.83† 20.9 17.2% 7.36% GDR (ours) 49.2% 3.86 3.68 3.77 16.7 3.66 22.7 21.5% 13.0% Table 3: Results of human evaluations (on the left) and automatic metrics (on the right). The Dist-1.& 2. are scaled by 10−2. Significant tests (t-test) are performed, and our method is significantly better than all methods on most metrics (p-value<0.05), with the exceptions marked by †. We also present two model-based ratios, the Entdiin and the Entbert, as an additional reference for persona consistency assessments. Note that the automatic metrics are calculated on the whole test set. * The sampling process in CVAE leads to very unstable PPL. GDR vs Win(%) Tie(%) Lose(%) S2SA 48.0 38.2 13.8 Per-CVAE 46.1 29.8 24.1 DeepCopy 43.8 35.5 20.7 Per-S2S 41.3 36.1 22.6 GPMN 35.0 31.0 34.0 Transformer 34.7 32.1 33.2 Table 4: GDR response quality gains over other baseline methods on a pairwise human judgment. 4.5 Main Results We report the main evaluation results in Table 3. Compared with baseline methods, our GDR model obtains the highest consistency score of 49.2% in human evaluation, which is significantly better than other methods. The target responses in the sampled data are also annotated, and 65.4% of them expressed persona information. Moreover, the two model-based entailment ratios, which are calculated on the whole test set, also prove the effectiveness of our GDR model. Although the two NLI models differ in results, our GDR model ranks first under the evaluation of both DIIN and BERT. For dialogue quality, our proposed model has a remarkably lower perplexity of 16.7 than all other baseline methods. An analysis can be seen in Section 4.6. Besides, our distinct-2 metric is even significantly better than the Per-CVAE model, which is designed to generate diverse responses. Additionally, we carried out pairwise response comparison to see the dialogue quality gains. We report the results in Table 4. While the GDR model significantly improves persona consistency, it can still generate high-quality responses like the transformer and GPMN. 4.6 More Analysis As the proposed model achieves better performance than baseline methods, we turn to ablation tests to further quantify the contributions made by different components. We ablated our model through several different approaches: • GR It removes the matching model D, i.e., generates a prototype and rewrites it directly. 
• GRdR This approach replaces the matching model D with 10% random deleting (Rd), thus to see if the masked prototype, extracted by our matching model D, is beneficial. • G Our model’s generator, without further consistency matching and rewriting. • T It is a transformer generator but removes the tuple-interaction in section 3.2 and directly concatenates persona texts to the query. This model is equivalent to the vanilla transformer. We report the results in Table 5. First, we look into which components contribute to the consistency. As seen, from T, G, GR to GDR, every step has an observable improvement in Const., indicating the effectiveness of our model’s design. Both the tuple-interaction in G and the rewriting process in R contribute to the improvements of persona consistency. The GRdR approach, with nothing different from GDR but a random deleting strategy, serves as a foil to our GDR model, which indicates a well-learned consistency matching model is of 5828 Model Const. Fluc. Relv. Info. PPL GDR 49.2% 3.86 3.68 3.77 16.7 GR 42.4% 3.72 3.40 3.66 18.0 GRdR 40.0% 3.60 3.29 3.56 20.6 G 40.1% 3.69 3.35 3.55 26.3 T 38.8% 3.46 3.65‡ 3.54 27.9 Table 5: Results of the ablation study. GDR is significantly better than the ablated approaches, with an only exception marked by ‡. GDR vs Win(%) Tie(%) Lose(%) GRdR 41.7 39.5 18.8 GR 39.9 40.9 19.2 G 38.1 35.8 26.1 Table 6: Pairwise human judgment on response quality. great benefit to our three-stage generation framework to generate persona consistent dialogues. Second, we investigated the improvement of our perplexity. As we can see, the one-stage transformer approaches G and T have a perplexity higher than 26. In contrast, after we add the rewriter R, the perplexity of all approaches has a significant decline, no matter whether there is a matching model D. Lower perplexity means lower cross-entropy, which indicates the responses from the models have more ground truth words. To some extent, perplexity verifies the human evaluation results of the consistency. One reason for this improvement could be that the rewriter works like a denoising autoencoder (Vincent et al., 2008), and it can focus more on the reconstruction of the missing information of sequence itself, rather than learning to map a sequence to an entirely different one. We observed that the relevance scores of GR, GRdR, and G are a little inferior to the T. Even the GDR model is not significantly better than T on the relevance score. One plausible explanation is that all these models are specially designed for integrating persona information, although they obtain much better consistency score, it may come at the cost of relevance score. Moreover, we compared the GDR’s response quality with three ablated models and reported it in Table 6. As we can see, the deleting and rewriting, which are designed for improving consistency, also have a positive effect on the dialogue quality. At last, we presented some generated examples Persona i. My mother is a dentist ii. I’m currently looking for a job Query I want to become a physical therapist. Gen&Del I wish I could be a nurse . Rewrite I want to be a dentist like my mom. w/o Delete I want to become a nurse like you. Transfmr I hear you. I am a nurse and a mom. Per-S2S What kind of business? GPMN I’m a professional painter, you? DeepCopy What do you do for a living? Per-CVAE I guess I would buy you one Persona i. Gymnastics is my favorite sport ii. I’m a gold medalist olympian Query I’m a minivan driver. I drive around. Gen&Del That’s awesome . 
I love to ski . Rewrite That’s cool. I am a champion gymnast. w/o Delete That’s awesome. I am going to ski. Transfmr Oh. I am still in high school . Per-S2S I drive a big van, but around the house. GPMN I drive a FIAT. I love it. DeepCopy What do you do for a living? Per-CVAE That is why you like dogs? Tell me. Persona i. I like to grill outdoors ii. I have been retired for years Query Too old to work in construction. You? Gen&Del I’m retired . I spend time outdoors . Rewrite I’m retired. I spend most time outdoors. w/o Delete I’m retired. I spend most time outdoors. Transfmr I am a boy and work in the construction. Per-S2S No, I am a student. GPMN I am a teacher, I love to be a teacher. DeepCopy I work in the construction industry. Per-CVAE I am a retired officer I love my bike ride. Table 7: Example responses from different models, with a visualization of the consistency matching weights. Strikethrough words are the masked words in Delete stage. The w/o Delete is the ablated model GR in section 4.6, and Transfmr is short for Transformer. in Table 7, together with the visualization of attention weights from match module D. In the first case, although the generated prototype is neutral regarding the persona, the word “nurse” is still masked according to our strategy. And after the rewriting stage, the final response expresses persona. In the second case, the prototype is potentially contradictory to the persona, and the keyword is successfully deleted by the matching model D. In the third case, the prototype is consistent with the persona, and no word is deleted. As a result, the final output response is the same as the output of no deletion model GR. In these cases, both consistency and quality are improved after the final rewriting. 5829 5 Conclusion and Future Work In this paper, we presented a three-stage framework, Generate-Delete-Rewrite, for persona consistent dialogue generation. Our method adopts transformer architecture and integrates a matching model to delete the inconsistent words. Experiments are carried out on public-available datasets. Both human evaluations and automatic metrics show that our method achieves remarkably good performance. In the future, we plan to extend our approach to improve the consistency of multi-turn dialogues. Acknowledgments This paper is supported by the National Natural Science Foundation of China under Grant No.61772153 and No.61936010. Besides, we want to acknowledge the Heilongjiang Province Art Planning Project 2019C027 and the Heilongjiang Province Social Science Research Project 18TQB100. We also would like to thank all the anonymous reviewers for their helpful comments and suggestions. References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642. Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, Wai Lam, and Shuming Shi. 2019a. Skeleton-to-response: Dialogue generation guided by retrieval memory. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1219–1228, Minneapolis, Minnesota. Association for Computational Linguistics. 
Deng Cai, Yan Wang, Wei Bi, Zhaopeng Tu, Xiaojiang Liu, and Shuming Shi. 2019b. Retrievalguided dialogue response generation via a matchingto-generation framework. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1866–1875, Hong Kong, China. Association for Computational Linguistics. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657–1668, Vancouver, Canada. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241. Nouha Dziri, Ehsan Kamalloo, Kory Mathewson, and Osmar Zaiane. 2019. Evaluating coherence in dialogue systems using entailment. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3806–3812, Minneapolis, Minnesota. Association for Computational Linguistics. Yichen Gong, Heng Luo, and Jian Zhang. 2018. Natural language inference over interaction space. In International Conference on Learning Representations. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Opensource toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67–72, Vancouver, Canada. Association for Computational Linguistics. Wenqiang Lei, Xiangnan He, Yisong Miao, Qingyun Wu, Richang Hong, Min-Yen Kan, and Tat-Seng Chua. 2020. Estimation-action-reflection: Towards deep interaction between conversational and recommender systems. In Proceedings of the 13th International Conference on Web Search and Data Mining, pages 304–312. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. 5830 Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 994–1003, Berlin, Germany. Association for Computational Linguistics. Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. 2016c. Deep reinforcement learning for dialogue generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas. 
Association for Computational Linguistics. Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2157–2169. Hui Liu, Qingyu Yin, and William Yang Wang. 2019. Towards explainable NLP: A generative explanation framework for text classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5570–5581, Florence, Italy. Association for Computational Linguistics. Qian Liu, Yihong Chen, Bei Chen, Jian-Guang Lou, Zixuan Chen, Bin Zhou, and Dongmei Zhang. 2020. You impress me: Dialogue generation via mutual persona perception. arXiv preprint arXiv:2004.05388. Yang Liu, Chengjie Sun, Lei Lin, and Xiaolong Wang. 2016. Learning natural language inference using bidirectional lstm model and inner-attention. arXiv preprint arXiv:1605.09090. Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2017. Assigning personality/identity to a chatting machine for coherent conversation generation. arXiv preprint arXiv:1706.02861. Iulian Vlad Serban, Ryan Lowe, Peter Henderson, Laurent Charlin, and Joelle Pineau. 2015. A survey of available corpora for building data-driven dialogue systems. arXiv preprint arXiv:1512.05742. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In AAAI, volume 16, pages 3776–3784. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364. Haoyu Song, Wei-Nan Zhang, Yiming Cui, Dong Wang, and Ting Liu. 2019a. Exploiting persona information for diverse generation of conversational responses. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 5190–5196. Haoyu Song, Wei-Nan Zhang, Jingwen Hu, and Ting Liu. 2019b. Generating persona consistent dialogues by exploiting natural language inference. arXiv preprint arXiv:1911.05889. Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Pengwei Hu, Cheng Niu, and Jie Zhou. 2019. Improving multi-turn dialogue modelling with utterance ReWriter. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 22–31, Florence, Italy. Association for Computational Linguistics. Yixuan Su, Yan Wang, Simon Baker, Deng Cai, Xiaojiang Liu, Anna Korhonen, and Nigel Collier. 2020. Prototype-to-style: Dialogue generation with styleaware editing on retrieval memory. arXiv preprint arXiv:2004.02214. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104–3112. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096–1103. ACM. Oriol Vinyals and Quoc Le. 
2015. A neural conversational model. arXiv preprint arXiv:1506.05869. Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019. Dialogue natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3731–3741, Florence, Italy. Association for Computational Linguistics. Jason Weston, Emily Dinan, and Alexander Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 87–92. Yu Wu, Furu Wei, Shaohan Huang, Yunli Wang, Zhoujun Li, and Ming Zhou. 2019. Response generation by context-aware prototype editing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7281–7288. 5831 Semih Yavuz, Abhinav Rastogi, Guan-Lin Chao, and Dilek Hakkani-Tur. 2019. Deepcopy: Grounded response generation with hierarchical pointer networks. arXiv preprint arXiv:1908.10731. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213, Melbourne, Australia. Association for Computational Linguistics. Wei-Nan Zhang, Qingfu Zhu, Yifa Wang, Yanyan Zhao, and Ting Liu. 2019. Neural personalized response generation as domain adaptation. World Wide Web, 22(4):1427–1446. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 654–664. Yinhe Zheng, Guanyi Chen, Minlie Huang, Song Liu, and Xuan Zhu. 2019a. Personalized dialogue generation with diversified traits. arXiv preprint arXiv:1901.09672. Yinhe Zheng, Rongsheng Zhang, Xiaoxi Mao, and Minlie Huang. 2019b. A pre-training based personalized dialogue generation model with personasparse data. arXiv preprint arXiv:1911.04700.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5832–5841 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5832 Learning to Customize Model Structures for Few-shot Dialogue Generation Tasks Yiping Song1, Zequn Liu1, Wei Bi2, Rui Yan3, Ming Zhang1∗ 1Department of Computer Science, School of EECS, Peking University, Beijing, China 2Tencent AI Lab, Shenzhen, China 3Wangxuan Institute of Computer Technology, Peking University, Beijing, China songyiping,zequnliu,mzhang cs,[email protected] [email protected] Abstract Training the generative models with minimal corpus is one of the critical challenges for building open-domain dialogue systems. Existing methods tend to use the meta-learning framework which pre-trains the parameters on all non-target tasks then fine-tunes on the target task. However, fine-tuning distinguishes tasks from the parameter perspective but ignores the model-structure perspective, resulting in similar dialogue models for different tasks. In this paper, we propose an algorithm that can customize a unique dialogue model for each task in the few-shot setting. In our approach, each dialogue model consists of a shared module, a gating module, and a private module. The first two modules are shared among all the tasks, while the third one will differentiate into different network structures to better capture the characteristics of the corresponding task. The extensive experiments on two datasets show that our method outperforms all the baselines in terms of task consistency, response quality, and diversity. 1 Introduction Generative dialogue models often require a large amount of dialogues for training, and it is challenging to build models that can adapt to new domains or tasks with limited data. With recent advances in large-scale pre-training [Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2018], we can first pre-train a generative model on large-scale dialogues from the non-target domains and then fine-tune on the task-specific data corpus [Wang et al., 2019a; Alt et al., 2019a; Klein, 2019]. While pre-training is beneficial, such models still require sufficient taskspecific data for fine-tuning. They cannot achieve satisfying performance when very few examples ∗Corresponding author are given [Bansal et al., 2019]. Unfortunately, this is often the case in many dialogue generation scenarios. For example, in personalized dialogue generation, we need to quickly adapt to the response style of a user’s persona by just a few his or her dialogues [Madotto et al., 2019; Zhang et al., 2018]; in emotional dialogue generation, we need to generate a response catering to a new emoji using very few utterances containing this emoji [Zhou et al., 18; Zhou and Wang, 2018]. Hence, this is the focus of our paper - few-shot dialogue generation, i.e. training a generative model that can be generalized to a new task (domain) within k-shots of its dialogues. A few works have been proposed to consider few-shot dialogue generation as a meta-learning problem [Madotto et al., 2019; Qian and Yu, 2019; Mi et al., 2019]. They all rely on the popular modelagnostic meta-learning (MAML) method [Finn et al., 2017]. Take building personalized dialogue models as an example, previous work treats learning dialogues with different personas as different tasks [Madotto et al., 2019; Qian and Yu, 2019]. 
They employ MAML to find an initialization of model parameters by maximizing the sensitivity of the loss function when applied to new tasks. For a target task, its dialogue model is obtained by finetuning the initial parameters from MAML with its task-specific training samples. Despite the apparent success in few-shot dialogue generation, MAML still has limitations [Zintgraf et al., 2019]. The goal of generative dialogue models is to build a function mapping a user query to its response, where the function is determined by both the model structure and parameters [Brock et al., 2018]. By fine-tuning with a fixed model structure, MAML only searches the optimal parameter settings in the parameter optimization perspective but ignores the search of optimal network structures in the structure optimization perspective. More5833 over, language data are inherently discrete and dialogue models are less vulnerable to input changes than image-related models [Niu and Bansal, 2018], which means gradients calculated from a few sentences may not be enough to change the output word from one to another. Thus there is a need to develop an effective way to adjust MAML for large model diversity in dialogue generation tasks. In this paper, we propose the Customized Model Agnostic Meta-Learning algorithm (CMAML) that is able to customize dialogue models in both parameter and model structure perspective under the MAML framework. The dialogue model of each task consists of three parts: a shared module to learn the general language generation ability and common characteristics among tasks, a private module to model the unique characteristic of this task, and a gate to absorb information from both shared and private modules then generate the final outputs. The network structure and parameters of the shared and gating modules are shared among all tasks, while the private module starts from the same network but differentiates into different structures to capture the task-specific characteristics. In summary, our contributions are as follows: • We propose the CMAML algorithm that can customize dialogue models with different network structures for different tasks in the few-shot setting. The algorithm is general and well unified to adapt to various few-shot generation scenarios. • We propose a pruning algorithm that can adjust the network structure for better fitting the training data. We use this strategy to customize unique dialogue models for different tasks. • We investigate two crucial impact factors for meta-learning based methods, i.e., the quantity of training data and task similarity. We then describe the situations where the meta-learning can outperform other fine-tuning methods. 2 Related Work Few-shot Dialogue Generation. The past few years have seen increasing attention on building dialogue models in few-shot settings, such as personalized chatbots that can quickly adapt to each user’s profile or knowledge background [Zhang et al., 2018; Madotto et al., 2019], or that respond with a specified emotion [Zhou et al., 18; Zhou and Wang, 2018]. Early solutions are to use explicit [Tian et al., 2017; Zhang et al., 2018; Zhou et al., 18] or implicit [Li et al., 2016b; Zhou and Wang, 2018; Zhou et al., 18] task descriptions, then introduce this information into the generative models. However, these methods require manually created task descriptions, which are not available in many practical cases. 
An alternative promising solution to building few-shot dialogue models is the meta-learning methods, especially MAML [Finn et al., 2017]. Madotto et al. (2019) propose to regard learning with the dialogue corpus of each user as a task and endow the personalized dialogue models by fine-tuning the initialized parameters on the taskspecific data. Qian and Yu (2019) and Mi et al. (2019) treat the learning from each domain in multidomain task-oriented dialogue generation as a task, and apply MAML in a similar way. All these methods do not change the original MAML but directly apply it to their scenarios due to the model-agnostic property of MAML. Thus, task differentiation always counts on fine-tuning, which only searches the best model for each task at the parameter level but not the model structure level. Meta-learning. Meta-learning has achieved promising results in many NLP problems recently due to its fast adaptation ability on a new task using very few training data [Yu et al., 2019; Wang et al., 2019b; Obamuyide and Vlachos, 2019b; Alt et al., 2019b]. In general, there are three categories of meta-learning methods: metric-based methods [Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Ye and Ling, 2019] which encode the samples into an embedding space along with a learned distance metric and then apply a matching algorithm, model-based methods [Santoro et al., 2016; Obamuyide and Vlachos, 2019a] which depend on the model structure design such as an external memory storage to facilitate the learning process, and optimization-based methods [Finn et al., 2017; Andrychowicz et al., 2016; Huang et al., 2018] which learn a good network initialization from which fine-tuning can converge to the optimal point for a new task with only a few examples. Methods belonging to the first two are proposed for classification, and those in the third category are model-agnostic. Therefore, it is intuitive to apply the optimization-based methods, in which MAML is most popular, for dialogue generation tasks. However, some researchers found that the original MAML has limited ability to model taskspecific characteristics in the image or text classification scenarios [Jiang et al., 2018; Sun et al., 5834 Really I Really Emm Really , ? ? I I I like don’t like cats know cats … ? I I hate hate pets pets Similar tasks Similar tasks All tasks I have a dog Query Encoder User-i Decoder User-j Decoder Training Corpus shared module gating module private module / tasks that are chosen/not for training data flow similar tasks Generative Models Figure 1: The proposed CMAML algorithm applying on the personalized dialogue systems. Each customized dialogue model Seq2SPG consists of a shared, a private, and a gating module. The shared and gating module are the same among users and are trained on all tasks. The private module is unique for each user to describe this user’s persona, and is trained on the corresponding and similar tasks. The lines in color indicate the data flow directions. 2019; Liu et al., 2020]. Jiang et al. (2018) build an attention layer over the convolutional layers, where the convolutional layer is for general features and the attention layer is for task-specific features. Sun et al. (2019) propose to learn a task-specific shifting and scaling operation on the general shared feed-forward layers. 
However, the involved operations in these two methods such as shifting and scaling are designed for feed-forward networks, and can not be applied to the generative models which generally rely on Seq2seq [Sutskever et al., 2014] models with recurrent GRU [Cho et al., 2014] or LSTM [Hochreiter and Schmidhuber, 1997] cells. In this paper, we propose a new meta-learning algorithm based on MAML that can enhance task-specific characteristics for generation models. 3 Dialogue Model In this section, we firstly describe the network structure of the proposed dialogue model, and then briefly introduce its pre-training. 3.1 Model Architecture We aim to build dialogue models for different generation tasks in the few-shot setting. Now, we first describe the dialogue model of each task to be used in our training algorithm. It involves three network modules and noted as Seq2SPG (in Figure 1): Shared Module. It gains the basic ability to generate a sentence and thus its parameters are shared among all tasks. We employ a prevailing Seq2seq dialogue model [Bahdanau et al., 2014]. At each decoding step t, we feed the word xt and last hidden state ht−1 to the decoding cell, and obtain an output distribution os over the vocabulary. Private Module. It aims at modeling the unique characteristics of each task. We design a multilayer perception (MLP) in the decoder to fulfill this goal. Each task has its unique MLP network, which starts from the same initialization and then evolves into different structures during training. At each decoding step t, the MLP takes the word xt and the output ht−1 of the shared module at step t −1 as input, then outputs a distribution op over the vocabulary. In our experiments, we also explore different inputs for the private module. Gating Module. We use a gate to fuse information from the shared and private modules: gs = tanh(Ws[os, op] + bs) gp = tanh(Wp[os, op] + bp) o = gs ◦os + gp ◦op (1) where Ws, Wp, bs, bp are parameters, ◦is elementwise product, and o is the word distribution. 3.2 Training Overview For the rest of the paper, p(T ) denotes the task distribution, Ti denotes the i-th task to be trained, Dtrain i and Dvalid i denotes the training and validation corpus of task Ti, and θi denotes all training parameters of the dialogue model for Ti, which include parameters θs/θp i /θg in the shared/private/gating module respectively. we consider a model represented by a parameterized function f with parameters θ. The model training for all tasks consists of two steps: pre-training and customized model training. In pre-training, CMAML employs the vanilla MAML to obtain a pre-trained dialogue model as the initial model θ for all tasks. At the beginning of the MAML, θ are randomly initialized. Then, two 5835 main procedures perform iteratively: meta-training and meta-testing. In meta-training, MAML first samples a set of tasks Ti∼p(T ). Then, for each task i, MAML adapts θ to get θ′ i with the taskspecific data, which is, θ′ i = θ −α∇θLDtrain i (f(θ)) (2) In the meta-testing, MAML tests tasks Ti∼p(T ) with θ′ i to obtain the losses and then updates θ by θ = θ −β∇θ X Ti∼p(T ) LDvalid i (f(θ′ i)) (3) Here, α and β are hyper-parameters. In standard MAML, each task obtains its parameters θi by fine-tuning the pre-trained θ. However, recall that fine-tuning fails to search the best model in the network structure perspective. Also, the generative models are less vulnerable to input changes, thus a few utterances may not be enough to adapt θ into diverse θi for different tasks. 
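For reference, the pre-training updates in Eqs. (2) and (3) can be sketched in PyTorch as follows. This is a minimal toy illustration rather than the paper's training code: the parameter dictionary, the linear model, and the MSE loss are placeholders, whereas in the paper the parameters belong to the Seq2SPG dialogue model and the loss is the response-generation negative log-likelihood.

```python
import torch

def inner_adapt(params, loss_fn, train_batch, alpha):
    # Eq. (2): theta_i' = theta - alpha * grad_theta L_{D_i^train}(f(theta))
    loss = loss_fn(params, train_batch)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {name: p - alpha * g for (name, p), g in zip(params.items(), grads)}

def meta_step(params, loss_fn, task_batches, alpha=0.01, beta=0.001):
    # Eq. (3): theta <- theta - beta * grad_theta sum_i L_{D_i^valid}(f(theta_i'))
    meta_loss = 0.0
    for train_batch, valid_batch in task_batches:
        adapted = inner_adapt(params, loss_fn, train_batch, alpha)
        meta_loss = meta_loss + loss_fn(adapted, valid_batch)
    grads = torch.autograd.grad(meta_loss, list(params.values()))
    return {name: (p - beta * g).detach().requires_grad_(True)
            for (name, p), g in zip(params.items(), grads)}

# Toy usage: a linear model with an MSE loss stands in for the Seq2SPG generator.
def mse_loss(params, batch):
    x, y = batch
    return ((x @ params["w"] - y) ** 2).mean()

params = {"w": torch.randn(4, requires_grad=True)}
tasks = [((torch.randn(8, 4), torch.randn(8)), (torch.randn(8, 4), torch.randn(8)))
         for _ in range(3)]
params = meta_step(params, mse_loss, tasks)
```

This sketch corresponds to the fine-tuning-from-a-shared-initialization recipe whose limitations for structure-level task differentiation are discussed above.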
To address these issues, we do not perform direct fine-tuning on each task, but design our second training step - Customized Model Training, in which the pretrained private module can evolve into different structures to capture the characteristics of each task and encourage model diversity. 4 Customized Model Training After obtaining the pre-trained model θ from MAML, we employ Customized Mode Training with the following two updating steps: • Private Network Pruning. This step is applied for the private module only, which is to differentiate the MLP structure of each task. Each task has a different MLP structure by retaining its own subset of active MLP parameters in order to characterize the uniqueness of this task. • Joint Meta-learning. In this step, we re-train parameters of all three modules of each task using MAML again, but each private module is with its pruned MLP structure now. Also, similar tasks with similar pruned MLP structures are jointly trained in order to enrich the training data. In the following, we will describe these two steps respectively as well as the gradient update of the whole dialogue model. 4.1 Private Network Pruning After pre-training, dialogue models of different tasks remain the same parameters θ, including θs/θp/θg in the shared/private/gating module. In this step, the private module with parameters θp will evolve into different structures with parameters θp i to capture the task’s unique characteristics. First, we fine-tune the whole dialogue model of each task from the MAML initialization with its own training data and add an L-1 regularization on the parameters of the private module. The goal of L-1 regularization here is to make the parameters sparse such that only parameters beneficial to generate task-specific sentences are active. Second, we apply an up-to-bottom strategy to prune the private MLP for each task. This is equal to selecting edges in the fully connected layers in the MLP. We do not prune the layers connected to the input and output of the MLP. For the rest layers, we start the pruning from the one closest to the output first. For the l-th layer, we consider layers above it (> l) are closer to the output, and its lower layers (< l) are closer to the input. When we process the l-th layer, its upper layers should already be pruned. We only keep edges of the current processed layer whose weight excels a certain threshold γ. If all edges in the l layer connected to a node is pruned, all edges connected to this node in the l −1 layer will also be pruned. In this way, the parameters in private module θp differentiates into |T| parameters θp i , where each θp i is a subset of θp. The pruning algorithm described above is illustrated in Algorithm 1. 4.2 Joint Meta-learning So far, every task has a unique network structure in its private module. Now we jointly train the whole dialogue models of all tasks. We start from the pre-trained MAML initialization again. For the shared and gating modules, all tasks share the same parameters, and they are trained with all training data. The private module, which is to capture the uniqueness of each task, is supposed to be trained on task-specific data. However, we do not have sufficient training data for each task in the few-shot setting, thus the private module may not be trained well. Fortunately, all private modules evolve from the same MLP structure, and similar tasks naturally share overlapped network structures, i.e. remaining edges after pruning are overlapped. 
This inspires us to train each edge in the private MLP by all training samples of tasks in which this edge is not pruned. Concretely, we train the private MLP in this way: 5836 Algorithm 1: Private Network Pruning Input: All parameters θp in the private MLP module, the sparsity threshold γ, the total number of layers L in the private MLP module. Output: The pruned parameters θp i in private module for task Ti. Finetune θp on the training data of Ti with L-1 regularization to otain θp i . for j ∈{1, . . . , L} do Ej ←All edges (i.e. parameters w.r.t. each edge) in the j-th layer in θp i Nj ←All nodes in the j-th layer in θp i Ekeep ←E|L| ∪E1; k ←|L| −1;Nkeep ←N|L| ∪N1. while k > 1 do for each edge e in Ek do if e > γ and the node connected with e in Nk+1 is in Nkeep then Ekeep ←Ekeep ∪{e}. for each node n in Nk do for each edge e in Ek connected with n do if e in Ekeep then Nkeep ←Nkeep ∪{n}; break. k ←k −1. return Ekeep as θp i for each edge e in the MLP, if it is active in more than one tasks, its corresponding parameters θp e are updated on the data of all task j’s, in which the edge is active, i.e. θp e ∈θp j : θ′p e = θp e −α∇θp e X Tj:θp e∈θp j LDtrain j (f(θp j )) (4) where each θp i /θ′p i only contains the θp e/θ′p e ’s of all active edges in the i-th task. During meta-testing, the loss is accumulated by the tasks that use the corresponding dialogue models, so θp is updated as, θp = θp −β X Ti∼p(T ) ∇θp i LDvalid i (f(θ′p i )) (5) 4.3 Gradient Updates We summarize the gradient updates of the three modules in our proposed dialogue model during customized model training in Algorithm 2. For the shared and gating module, gradients are updated in the same way as MAML. The update of the private module is replaced by the above Eq. 4 and Eq. 5 introduced in joint meta-learning. The loss function used to calculate the gradients in our model is the negative log-likelihood of generating the response r given the input query q as, L = −log p(r|q, θs, θp, θg) (6) Algorithm 2: Customized Model Training Input: The distribution over the task set p(T ), the step size α and β. Output: The customized dialogue models θs ∪θp i ∪θg for every task Ti. for each Ti in T do θp i ←Private Network Pruning(Ti). while not converge do Sample a batch of tasks Ti∼p(T ). for each sampled task Ti do Adapt θs/θg to θ′s i /θ′g i with Dtrain i using Eq. 2; Adapt θp i to θ′p i with Dtrain i using Eq. 4. Update θs, θg with Dvalid i using Eq. 3. Update θp i with Dvalid i using Eq. 5. return θs ∪θp i ∪θg 5 Experiments 5.1 Datasets We perform experiments in Persona-chat [Madotto et al., 2019] and MojiTalk [Zhou and Wang, 2018], which are treated as few-shot dialogue generation tasks in previous work [Zhang et al., 2018; Madotto et al., 2019; Zhou and Wang, 2018; Zhou et al., 18]. Persona-chat has 1137/99/100 users for training/validation/evaluation, and each user has 121 utterances on average. We follow the previous work [Madotto et al., 2019] and concatenate all the contextual utterances including the query as the input sequence. We regard building a dialogue model for a user as a task on this dataset. MojiTalk has 50/6/8 emojis for training/validation/evaluation. Each training/validation emoji has 1000 training samples on average, and each evaluation emoji has 155 samples on average. We regard generating responses with a designated emoji as a task. On both datasets, the data ratio for meta-training and meta-testing is 10:1. 
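Before moving to the implementation details, the magnitude-threshold pruning at the core of Algorithm 1 (Section 4.1) can be approximated by the short sketch below. It is a simplified, assumed rendering rather than a faithful reimplementation: edges of the L-1-regularized, fine-tuned private MLP are kept only if their weight magnitude exceeds the threshold γ, the layers connected to the input and output are left untouched, and the layer-by-layer node bookkeeping of Algorithm 1 is omitted.

```python
import torch

def prune_private_mlp(layer_weights, gamma=0.05):
    """Keep an edge of the fine-tuned private MLP only if its weight magnitude
    exceeds gamma; the layers connected to the input and output are never pruned.
    The node-level clean-up of Algorithm 1 (dropping lower-layer edges whose upper
    node has lost all of its edges) is deliberately omitted in this sketch."""
    names = list(layer_weights.keys())
    masks = {}
    for i, name in enumerate(names):
        w = layer_weights[name]
        if i == 0 or i == len(names) - 1:
            masks[name] = torch.ones_like(w)          # input / output layers stay intact
        else:
            masks[name] = (w.abs() > gamma).float()   # keep only sufficiently large edges
    return masks

# Illustrative 4-layer private MLP (layer sizes here are placeholders).
weights = {"layer%d" % i: 0.1 * torch.randn(300, 300) for i in range(4)}
masks = prune_private_mlp(weights, gamma=0.05)
```

The resulting binary masks define the task-specific subset of active edges that is then trained jointly with similar tasks.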
5.2 Implementation Details We implement our shared module based on the Seq2seq model with pre-trained Glove embedding [Pennington et al., 2014] and LSTM unit, and use a 4-layer MLP for the private module1. The dimension of word embedding, hidden state, and MLP’s output are set to 300. In CMAML, we pretrain the model for 10 epochs and re-train each model for 5 steps to prune the private network. The L-1 weight in the re-training stage is 0.001, and the threshold γ is 0.05. We follow other hyperparameter settings in Madotto et al. [2019]. 1Code is available at https://github.com/zequnl/CMAML 5837 5.3 Competing Methods • Pretrain-Only: We pre-train a unified dialogue generation model with data from all training tasks then directly test it on the testing tasks. We try three base generation models: the Seq2seq [Bahdanau et al., 2014] and the Speaker model [Li et al., 2016b] and the Seq2SPG proposed in Section3.1. Speaker incorporates the task (user/emoji) embeddings in the LSTM cell, and the task embeddings of testing tasks are random parameters in this setting. • Finetune: We fine-tune the pre-trained models on each testing task, denoted as Seq2seq-F, Speaker-F and Seq2SPG-F. • MAML [Madotto et al., 2019]: We apply the MAML algorithm on the base model Seq2seq and Seq2SPG, and note them as MAML-Seq2seq and MAML-Seq2SPG. MAML-Seq2SPG uses the same base model as the proposed CMAML but does not apply the pruning algorithm, which helps to verify the effectiveness of the pruning algorithm and joint meta-learning. Note that We did not apply MAML on Speaker model as it shows no improvement comparing with Seq2seq. • CMAML: We try two variants of our proposed algorithm. CMAML-Seq2SPG is our full model (equal to CMAML in previous sections), where the dialogue Seq2SPG is the base model and pruning algorithm is applied for customizing unique model structures for tasks. CMAML-Seq2SP′G uses a different base model noted as Seq2SP′G, where the private module only takes the output of the shared module as the input. Pruning algorithm is also applied in private module for network customization. 5.4 Evaluation Metrics Automatic Evaluation. We performed automatic evaluation metrics in three perspectives: • Response quality/diversity: We use BLEU [Papineni et al., 2002] to measure the word overlap between the reference and the generated sentence; PPL, the negative logarithm of the generated sentence; Dist-1 [Li et al., 2016a; Song et al., 2017, 2018] to evaluate the response diversity, which calculates the ratio of distinct 1-gram in all test generated responses. • Task consistency: We use C score [Madotto et al., 2019] in Persona-chat, which uses a pre-trained natural language inference model to measure the response consistency with persona description, and E-acc [Zhou and Wang, 2018] in MojiTalk, which uses an emotion classifier to predict the correlation between a response and the designated emotion. • Model difference: It is hard to measure the models ability of customization as we do not have the ground-truth model. Hence, we define the average model difference of pairwise tasks as the Diff Score of each method, and the model difference of a method before and after fine-tuning as ∆Score. The model difference between Ti and Tj is the Euclidean distance of their parameters normalized by their parameter count: D(Ti, Tj) = ||θi−θj||2 M . Here, θi/θj includes all model parameters of this task, M is the total parameter number of the model. 
A set of models that capture the unique characteristics of each task should be different from each other and will have a higher Diff score, indicating that a large Diff score is a sufficient condition for a strong customization ability. Similarly, a model that changes a lot for task specific adaptation during fine-tuning will achieve a higher ∆Score, indicating that ∆Score is also a sufficient condition for a good adaptation ability. Human Evaluation. We invited 3 well-educated graduated students to annotate the 100 generated replies for each method. For each dataset, the annotators are requested to grade each response in terms of “quality” and “task consistency” (i.e. personality consistency in Persona-Chat and emoji consistency in MojiTalk) independently in three scales: 2 (for good), 1 (for fair) and 0 (for bad). “quality” measures the appropriateness of replies, and we refer 2 for fluent, highly consistent (between query and reply), and informativeness, 1 for few grammar mistakes, moderate consistent, and universal reply, and 0 for incomprehensible or unrelated topic. “task consistency” measures whether a reply is consistent with the characteristics of a certain task, and we refer 2 for highly consistent, 1 for no conflicted and 0 for contradicted. Notice that the user description (Persona dataset) and sentences with a certain emoji (Mojitalk dataset) are provided as the references. Volunteers, instead of authors, conduct the double-blind annotations on shuffled samples to avoid subjective bias. 5.5 Overall Performance Quality/Diversity. In the Persona-chat dataset, Pretrain-Only methods provide the borderlines of all methods. In Pretrain-Only, Seq2SPG achieves the best performance in terms of both automatic and human measurements, indicating the appropri5838 Method Human Evaluation Automatic Metrics Model Difference Quality Task Consistency PPL BLEU Dist-1 C score/E-acc Diff Score ∆Score Persona-Chat Seq2seq 0.67 0.10 37.91 1.27 0.0019 -0.16 0.00 0.00 Speaker 0.85 0.10 40.17 1.25 0.0037 -0.14 0.00 0.00 Seq2SPG 0.67 0.03 36.46 1.41 0.0023 -0.14 0.00 0.00 Seq2seq-F 0.78 0.11 33.65 1.56 0.0046 -0.05 17.97 9.19 Speaker-F 0.87 0.25 35.61 1.52 0.0059 0.03 285.11 143.90 Seq2SPG-F 0.7 0.07 32.68 1.54 0.0045 -0.05 292.85 156.30 MAML-Seq2seq 0.97 0.37 37.43 1.54 0.0087 0.14 134.01 67.79 MAML-Seq2SPG 0.85 0.36 35.89 1.70 0.0074 0.16 401.28 198.90 CMAML-Seq2SP′G 0.98 0.58 37.32 1.43 0.0089 0.15 479.21 238.64 CMAML-Seq2SPG 1.15 0.69 36.30 1.70 0.0097 0.18 514.44 263.82 MojiTalk Seq2seq 0.56 0.39 218.95 0.36 0.0342 0.73 0.00 0.00 Speaker 0.38 0.26 418.96 0.19 0.0530 0.70 0.00 0.00 Seq2SPG 0.77 0.46 158.74 0.64 0.0239 0.74 0.00 0.00 Seq2seq-F 0.50 0.35 217.60 0.40 0.0326 0.72 15.96 8.88 Speaker-F 0.39 0.25 403.92 0.21 0.0528 0.72 39.08 29.10 Seq2SPG-F 0.76 0.47 157.92 0.65 0.0228 0.74 72.43 40.94 MAML-Seq2seq 0.66 0.29 179.02 0.54 0.0109 0.70 183.05 117.09 MAML-Seq2SPG 0.71 0.40 181.56 0.73 0.0246 0.74 306.40 176.31 CMAML-Seq2SP′G 0.64 0.32 172.92 0.76 0.0102 0.75 142.90 81.15 CMAML-Seq2SPG 0.78 0.49 185.97 0.85 0.0210 0.77 345.42 190.64 Table 1: Overall performance in Persona-chat (top) and MojiTalk (bottom) dataset in terms of quality (Human, Perplexity, BLEU), diversity (Dist-1), task consistency (Human, C score, E-acc), structure differences among tasks (Diff Score (×10−10)), model change after adaptation (∆score (×10−10)). 
Method 100-shot 110-shot Similar Users Dissimilar Users PPL BLEU C score PPL BLEU C score PPL BLEU C score PPL BLEU C score Seq2seq 38.13 1.19 -0.11 37.58 1.29 -0.15 76.54 1.49 -0.03 42.87 1.10 -0.10 Speaker 40.95 1.02 -0.25 42.59 1.27 -0.06 162.44 0.65 -0.09 46.86 1.11 -0.13 Seq2SPG 39.75 1.27 -0.10 37.71 1.30 -0.15 73.58 1.32 -0.04 42.21 1.14 -0.22 Seq2seq-F 34.86 1.39 -0.03 34.14 1.52 -0.10 74.53 1.53 -0.07 42.33 1.33 -0.06 Speaker-F 37.11 1.30 -0.16 39.10 1.36 -0.06 103.81 1.04 0.04 40.47 1.40 0.01 Seq2SPG-F 37.19 1.31 0.00 37.00 1.33 -0.15 70.15 1.44 -0.04 36.22 1.35 -0.05 MAML-Seq2seq 36.94 1.47 0.03 37.20 1.53 0.07 83.17 1.52 -0.08 39.67 1.34 0.06 MAML-Seq2SPG 36.50 1.52 0.11 35.98 1.47 0.13 82.37 1.52 -0.06 39.41 1.41 0.12 CMAML-Seq2SP′G 37.18 1.46 0.11 37.08 1.44 0.09 82.56 1.50 0.00 40.50 1.40 0.13 CMAML-Seq2SPG 36.52 1.52 0.14 36.44 1.57 0.15 82.78 1.56 -0.07 39.55 1.43 0.16 Table 2: The performance on the Persona-chat dataset for impact factor analysis. The left figure is about the few-shot settings and the right is about the task similarity. ateness of the proposed model structure. Finetune methods are better than Pretrain-Only methods in most cases. MAML methods have no better performance on BLEU scores than Finetune methods but have relatively higher Dist-1 scores. This indicates that MAML helps to boost response diversity. Enhanced with the proposed pruning algorithm, we can see great improvement for CMAML methods against all the competing methods on both quality and diversity measurements. Particularly, our full model CMAML-Seq2SPG shows clearly better performance and the reasons can be ascribed to two aspects: firstly, the proposed Seq2SPG has a better model structure for our task and secondly, the pruning algorithm makes the models more likely to generate a user-coherent response. Most of the performance of the competing methods in the MojiTalk dataset is similar to the Persona-chat dataset, while one difference is that Speaker achieves the highest Dist-1 score among all the methods. By carefully analyzing the generated cases, we find all non-meta-learning methods (Pretrain-Only and Finetune) consistently produce random word sequences, which means they completely fail in the few-shot setting on this task. However, meta-learning-based methods survive. Task Consistency. On both datasets, Finetune methods make no significant differences on C score, E-acc and Task Consistency when compared with Pretrain-Only methods, which means that simple fine-tuning is useless for improving the task consistency. All meta-learning methods including MAML and CMAML outperforms Finetune. Compared with MAML-Seq2seq and MAMLSeq2SPG, CMAML-Seq2SPG obtain 22.2%/12.5% and 11.8%/5.6% improvement on C score and Eacc. It means that the private modules in CMAMLSeq2SPG are well pruned to better well describes the unique characteristics of each task. We also observe that in MojiTalk, CMAMLSeq2SPG achieves good improvement compared with other baselines on the BLEU score but a lim5839 ited improvement on E-acc and task consistency score when compared with Persona-chat. This tells that when the training data is limited, the generative models tend to focus on the correctness of the response rather than the task consistency. By jointly analyzing the response quality and task consistency measurement, we can easily draw the conclusion that the responses produced by our algorithm in CMAML-Seq2SPG not only is superior in response quality but also caters to the characteristics of the corresponding task. 
Model Differences. Even though a high difference score among tasks does not indicate each model has captured its unique characteristics, a set of models that can capture the characteristics of themselves will have a higher different score. Hence, we present the difference scores of competing methods as a reference index. In Table 1, we can see that fine-tuning on non-meta-learning methods (Pretrain-Only and Finetune) does not boost the model differences between tasks. MAML helps to increase the model differences but is not as good as the proposed CMAML methods. CMAMLSeq2SPG achieves the highest model difference scores on two datasets as it distinguishes different tasks in both parameter and model structure level. A higher ∆score of a method means its produced dialogue models are more easy to finetune. All non-meta-learning methods have so much lower ∆scores than MAML methods. CMAMLSeq2SPG has the highest scores on both datasets, indicating that the active edges in the private module are more likely to be fine-tuned to better fit the corpus of the corresponding tasks. We also observe that CMAML-Seq2SP′G has relatively low ∆ scores, which indicates its base generation model Seq2S′G is not as good as Seq2SPG. 5.6 Impact Factors We further examine two factors that may have a great impact on the performance: the quantity of training data and the similarity among tasks. Few-shot Settings. We only use Persona-chat dataset for analysis, because MojiTalk has too little data to further decrease. In Persona-chat, each user has 121 training samples on average, and we evaluate all the methods in a 100 and 110 samples setting (both in train and test) in Table 2 because all the methods tend to produce random sequences when each task contains less than 100 samples. For non-meta-learning methods including Persona I also love vintage cars. I am a pediatrician. I love running and reading. Query Singing karaoke is a talent of mine. Do you sing too? Response Not really. I am into running, books and old cars. Seq2seq I do not have any pets. I do not have any pets. Speaker No, I do not. I do not have any. Seq2SPG No , I do not have any pets. Seq2seq-F I do not have any pets. I do not have any pets. Speaker-F No, I do not. I do not have any. Seq2SPG-F No , I do not have any pets. MAML-Seq2seq Yes I do. I am a nurse. MAML-Seq2SPG I like to listen to jazz and jazz . CMAML-Seq2SP′G Yes, I am a doctor. I am a pediatrician. CMAML-Seq2SPG Yes, I am a pediatrician. What do you do for a living? Table 3: A case in Persona-chat dataset. Pretrain-Only and Finetune, the quality scores improve as the quantity of training data increases, while the C scores almost remain the same as these methods are not sensitive to the differences among tasks. MAML methods have not changed too much on BLEU scores along with the data growth, but its C scores keep increasing. Both the BLEU score and C score of CMAML-Seq2SPG keep increasing with the data growth, and it always achieves the best performance among all the tasks. This proves that the customized generative models are suitable for the corresponding tasks and can always take the full potential of the training data. Task Similarity. Again, we only use the Personachat dataset because we cannot define similarities among emojis. We construct two datasets: one contains 100 similar users and another contains 100 dissimilar users (both in train and test). The performance of all the methods is close to each other in the similar-user setting. 
It means meta-learning-based methods have no advantage for similar tasks. In the dissimilar-users setting, CMAML-Seq2SPG performs best on the C score and BLEU. We draw a conclusion that user similarity influences the performance of our model. Compared to that in dissimilar-users setting, the BLEU in the similar-users setting is high, but the C score is low. The possible reason is that generative models do not distinguish similar tasks and regard all tasks as one task in training. 5.7 Case Study We only present one case in the Persona-chat dataset due to the limited space in Table 3. Pretrain-Only and Finetune methods produce general responses with less information. MAML methods tend to generate diverse responses as their initial parameters are easier to be finetuned. Even 5840 though the user profiles are not used for training, CMAML-Seq2SPG can quickly learn the persona information “pediatrician” from its training dialogues while other baselines can not. From another perspective, the pruned private module in CMAMLSeq2SPG can be regarded as a special memory that stores the task-specific information without explicit definition of memory cells. 6 Conclusion In this paper, we address the problem of the fewshot dialogue generation. We propose CMAML, which is able to customize unique dialogue models for different tasks. CMAML introduces a private network for each task’s dialogue model, whose structure will evolve during the training to better fit the characteristics of this task. The private module will only be trained on the corpora of the corresponding task and its similar tasks. The experiment results show that CMAML achieves the best performance in terms of response quality, diversity and task consistency. We also measure the model differences among tasks, and the results prove that CMAML produces diverse dialogue models for different tasks. 7 Acknowledgement This paper is partially supported by National Key Research and Development Program of China with Grant No. 2018AAA0101900/2018AAA0101902, Beijing Municipal Commission of Science and Technology under Grant No. Z181100008918005, and the National Natural Science Foundation of China (NSFC Grant No. 61772039, No. 91646202, and No. 61876196). References Christoph Alt, Marc H¨ubner, and Leonhard Hennig. Fine-tuning pre-trained transformer language models to distantly supervised relation extraction. In ACL, pages 1388–1398, 2019. Christoph Alt, Marc H¨ubner, and Leonhard Hennig. Fine-tuning pre-trained transformer language models to distantly supervised relation extraction. In ACL, pages 1388–1398, 2019. Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient descent by gradient descent. In NeuralPS, pages 3981–3989, 2016. Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. Trapit Bansal, Rishikesh Jha, and Andrew McCallum. Learning to few-shot learn across diverse natural language classification tasks. arXiv preprint arXiv:1911.03863, 2019. Andrew Brock, Theodore Lim, James Millar Ritchie, and Nicholas J Weston. Smash: One-shot model architecture search through hypernetworks. In ICLR, 2018. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder–decoder approaches. In SSST-8, pages 103–111, 2014. 
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, pages 1126–1135, 2017. Sepp Hochreiter and J¨urgen Schmidhuber. Long shortterm memory. Neural computation, 9(8):1735– 1780, 1997. Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146, 2018. Po-Sen Huang, Chenglong Wang, Rishabh Singh, Wen tau Yih, and Xiaodong He. Natural language to structured query generation via meta-learning. In NAACL-HLT, pages 732–738, 2018. Xiang Jiang, Mohammad Havaei, Gabriel Chartrand, Hassan Chouaib, Thomas Vincent, Andrew Jesson, Nicolas Chapados, and Stan Matwin. Attentive taskagnostic meta-learning for few-shot text classification. arXiv preprint arXiv:1805.07722, 2018. Nikita Kitaev Steven Cao Dan Klein. Multilingual constituency parsing with self-attention and pre-training. In ACL, pages 3499–3505, 2019. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objective function for neural conversation models. In NAACL, pages 110–119, 2016. Jiwei Li, Michel Galley, Chris Brockett, Georgios Spithourakis, Jianfeng Gao, and Bill Dolan. A personabased neural conversation model. In ACL, pages 994–1003, 2016. Luchen Liu, Zequn Liu, Haoxian Wu, Zichang Wang, Jianhao Shen, Yiping Song, and Ming Zhang. Multitask learning via adaptation to similar tasks for mortality prediction of diverse rare diseases. In arXiv preprint arXiv:2004.05318, 2020. 5841 Andrea Madotto, Zhaojiang Lin, Chien-Sheng Wu, and Pascale Fung. Personalizing dialogue agents via meta-learning. In ACL, pages 5454–5459, 2019. Fei Mi, Minlie Huang, Jiyong Zhang, and Boi Faltings. Meta-learning for low-resource natural language generation in task-oriented dialogue systems. In IJCAI, pages 3151–3157, 2019. Tong Niu and Mohit Bansal. Adversarial oversensitivity and over-stability strategies for dialogue models. In CoNLL, pages 486–496, 2018. Abiola Obamuyide and Andreas Vlachos. Metalearning improves lifelong relation extraction. In ACL, pages 224–229, 2019. Abiola Obamuyide and Andreas Vlachos. Modelagnostic meta-learning for relation classification with limited supervision. In ACL, pages 5873–5879, 2019. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311–318, 2002. Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In EMNLP, pages 1532–1543, 2014. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018. Kun Qian and Zhou Yu. Domain adaptive dialog generation via meta learning. In ACL, pages 2639–2649, 2019. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. URL: https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper. pdf, 2018. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Metalearning with memory-augmented neural networks. In ICML, pages 1842–1850, 2016. Jake Snell, Kevin Swersky, and Richard Zemel. 
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5842–5848 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5842 Video-Grounded Dialogues with Pretrained Generation Language Models Hung Le†∗, Steven C.H. Hoi†‡ †Singapore Management University [email protected] ‡Salesforce Research Asia [email protected] Abstract Pre-trained language models have shown remarkable success in improving various downstream NLP tasks due to their ability to capture dependencies in textual data and generate natural responses. In this paper, we leverage the power of pre-trained language models for improving video-grounded dialogue, which is very challenging and involves complex features of different dynamics: (1) Video features which can extend across both spatial and temporal dimensions; and (2) Dialogue features which involve semantic dependencies over multiple dialogue turns. We propose a framework by extending GPT-2 models to tackle these challenges by formulating videogrounded dialogue tasks as a sequence-tosequence task, combining both visual and textual representation into a structured sequence, and fine-tuning a large pre-trained GPT-2 network. Our framework allows fine-tuning language models to capture dependencies across multiple modalities over different levels of information: spatio-temporal level in video and token-sentence level in dialogue context. We achieve promising improvement on the AudioVisual Scene-Aware Dialogues (AVSD) benchmark from DSTC7, which supports a potential direction in this line of research. 1 Introduction Recent work in large-scale pre-training transformerbased neural networks (Liu et al., 2019; Devlin et al., 2019; Radford et al., 2019) has boosted the performance in various NLP tasks. The transformer-based architecture of these models allows them to capture various dependencies when trained on very large datasets. The pre-trained models are adapted into downstream tasks to generate text that is more natural, fluent, and richer than ∗This work was mostly done when Hung Le was an intern at Salesforce Research Asia, Singapore. models not initialized with pre-trained weights. Similar to pre-trained CNN-based neural networks developed in computer vision research (He et al., 2016; Huang et al., 2017) which can learn highresolution features in images, pre-trained language models (LMs) are capable of capturing fine-grain textual dependencies in text data of rich semantics. While the benefits of pre-trained language models present in many downstream NLP tasks such as machine translation and question answering (QA) (Devlin et al., 2019; Lan et al., 2020), they are particularly suitable to adapt to dialogue response generation tasks for two major reasons: (1) Dialogue response generation usually involves more complex dynamics between input and output text sequences. The input typically involves dialogue history, including conversational exchanges between users and dialogue agents. A dialogue agent needs to capture relevant dependencies along each dialogue turns to generate a sensible response. (2) Compared to other NLP tasks, it is very challenging to collect and create large-scale dialogue datasets. Adopting pre-training approaches could ameliorate the limited dialogue datasets by leveraging rich linguistic dependencies learned from other available text data. We are motivated by these observations to adapt pre-trained language models into a dialogue task and improve the quality of generated responses. 
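For concreteness, the following minimal sketch illustrates the sequence-to-sequence formulation summarized above: the caption, dialogue history, and target response are flattened into one GPT-2 token sequence, with the video features and the level-wise encodings added later in Section 3. The example strings and the tokenizer choice are illustrative assumptions, not the released implementation.

    from transformers import GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

    caption = "a man is standing"
    history = ["how many people are in the video", "there is just one person"]
    response = "there is just one person"

    # Flatten caption + dialogue history + target response into one sequence;
    # in the full model this text sequence is concatenated with the encoded
    # video features before being fed to GPT-2 (Section 3).
    text = " ".join([caption] + history + [response])
    input_ids = tokenizer.encode(text, return_tensors="pt")  # shape: (1, sequence_length)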
Along the line of research that combines both vision and language (Antol et al., 2015; Hori et al., 2019), transformer-based neural networks can also be applied to capture various dependencies across different types of input modalities (text and image) with appropriate objective loss functions (Alberti et al., 2019; Su et al., 2020; Chen et al., 2019). The multi-head attention mechanism of these models can detect long-range dependencies between each token in the input text and each image patch or spatial object in the input image. We extend this framework to a video-dialogue task and fully leverage the power of pre-trained models to obtain linguistic and visual representations in dialogues and videos.

Figure 1: The proposed VGD-GPT2 architecture for video-grounded dialogues based on the pre-trained transformer model (GPT-2). The video and text inputs are combined over multiple encoding layers that inject different attributes into the encoded features.

Specifically, we tackle the Audio-Visual Scene-aware Dialogues (AVSD) task (Hori et al., 2019), which aims to generate dialogue responses grounded on both visual and audio features of the video. The dialogue agent needs to create responses that not only match the dialogue flow but also address user questions about a given video over multiple dialogue turns. First, we detail how to formulate the input components of a video-grounded dialogue as a downstream task of pre-trained language models. We follow the general sequence-to-sequence framework, whereby the input components are combined into a structured sequence of multiple modalities and the output is a system response. We then apply pre-trained models (Radford et al., 2019) to leverage the deep attention neural networks to capture text and video dependencies with fine granularity. Specifically, we propose to capture dependencies between each token in text data and each spatial feature along the temporal dimension of the input video. Lastly, we present a multi-task learning framework that includes additional learning objectives in addition to the dialogue response generation objective. Our promising results on the AVSD benchmark demonstrate the efficacy of our proposed framework.

2 Related Work

We briefly describe related work in two major lines of research: dialogues and vision-text modeling.

2.1 Dialogue Modeling

Whang et al. (2019) applies pre-trained language models for response selection tasks in open-domain dialogues. The output of the language model (e.g., the [CLS] token in BERT) is used as a contextual representation of each pair of dialogue context and candidate response. Budzianowski and Vulić (2019) assumes access to ground-truth dialogue states and generates responses in task-oriented dialogues by combining input components into a single sequence.
As dialogue states and database states are used as raw text input, the models can be fine-tuned from a deep pre-trained language model such as GPT. Chao and Lane (2019) and Lai et al. (2020) use pre-trained LMs to track dialogue states in taskoriented dialogues by utilizing the output representations to predict slot values. In this work, we aim to address video-grounded dialogue tasks and generate natural responses in an end-to-end manner. 2.2 Vision-Text Modeling The transformer-based neural architecture of pretrained language models has been used to learn cross-modal representations for vision-text NLP tasks. Li et al. (2019) uses a BERT-based architecture to improve linguistic and visual representations for image captioning tasks. Lu et al. (2019) follows a similar approach to tackle visual QA but segregates the visual and text input components rather combining both into a single sequence. Alberti et al. (2019) leverages a pre-trained BERT model to improve cross-modal representations in either early fusion or late fusion approach. We are motivated to extend this line of research to a video-based setting. Video is considered much more complicated than image due to the additional temporal variation across video frames. A related work to ours is VideoBERT (Sun et al., 2019) which utilizes BERT models for video captioning. Instead of using visual features to represent video frames, VideoBERT transforms frame-level features into visual tokens and uses them as raw text input to a BERT-based architecture. 5844 3 Method Our model architecture can be seen in Figure 1. We are inspired by Transformer-based LM approaches that leverage different levels of features in text, such as word, character, and position levels. We apply this principle and technique to overcome the challenge in AVSD which involves multi-turn dialogue input combined with video input with spatialtemporal variations. We propose to decompose videos into patches but maintain a structured temporal sequence. This sequence is then directly combined with text inputs of dialogue which are also arranged in a temporally ordered manner. This kind of feature reformulation is simple yet powerful as it allows explicit dependency learning across all pairs of text tokens and video patches. Therefore, it can facilitate stronger signals to answer human queries in greater granularities. 3.1 Model Architecture We trained a GPT model based on the GPT-2 (Radford et al., 2019) architecture. The GPT-2 model is based on the transformer network (Vaswani et al., 2017) which includes 12 to 24 layers of masked multi-head attention on very large text data. Following the success of GPT-2 in generation-based tasks, we adapt the power of GPT-2 pre-trained models to generate video-grounded dialogue responses and call our framework “VGD-GPT2”. First, we modify the input components as a long sequence of video frames or video segments and dialogue turns. Video Representations. Each video frame or video segment is further structured as a sequence of spatial regions, which can be extracted using a pre-trained video model. For an input video V , we denote the output of a pre-trained 2D CNN or 3D CNN video model as Zpre V ∈RF×P×demb where demb is the feature dimension of the pre-trained video model, F is the resulting number of sampled video frames or video segments, and P is the number of spatial regions in each video frame. 
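As a concrete reference for the tensor shapes just introduced, and for the projection of Eq. (1) described next, the following is a minimal sketch; the sizes and variable names are illustrative assumptions rather than the authors' code, and random values stand in for features from a real pre-trained video CNN.

    import torch
    import torch.nn as nn

    F_frames, P_regions, d_emb, d = 32, 49, 2048, 768  # assumed sizes; d is the GPT-2 hidden size

    # Z^pre_V: output of a pre-trained 2D/3D CNN video model, one feature vector
    # per spatial region of each sampled frame (random stand-in values here).
    z_pre_v = torch.randn(F_frames, P_regions, d_emb)

    # Flatten to a sequence of F*P region features and project to the language
    # model dimension with a ReLU, as in Eq. (1) below.
    w_v = nn.Linear(d_emb, d, bias=False)
    z_spatial_v = torch.relu(w_v(z_pre_v.reshape(F_frames * P_regions, d_emb)))  # (F*P, d)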
We reshape ZV as a sequence of image patches and pass it through a linear transformation with ReLU activation to match the feature dimension d of pretrained language model: Zspatial V = ReLU(Zpre V WV ) ∈RFP×d (1) where WV ∈Rdemb×d. We denote this as spatiallevel features of input video. As can be seen from Figure 1, we inject different types of input attributes into XV by adding three additional encoding layers: (1) Modality-level encoding that informs the type of information. We use a modality token “vis” to uniformly represent visual information type. (2) Temporal-level encoding that informs model the frame-level (or segment-level) position of input features. (3) Position-level encoding that incorporates the spatial-level ordering. This is equivalent to the positional encoding of tokens in sentences seen in BERT-based language models. All the three layers are trainable parameters to enable models to learn the dynamics of input features. All encoding layers are modeled to have the same feature dimension d of the pre-trained model. We combine all encoding layers through element-wise summation, resulting in a rich video representation: ZV = Zspatial V + Zmod V + Ztemporal V + Zpos V (2) Text Representations. Similarly, we break down dialogue history H as sequence of dialogue turns H = (H1, H2, ..., Ht) where t is the current dialogue turn. Each dialogue turn is represented as a pair of user utterance U and system response S concatenated sequentially H = ((U1, S1), (U2, S2), ..., Ut)) (St is the target response that need to be generated by the models). Each utterance is then represented as a sequence of tokens x so the dialogue history can be represented as XH = (x1, x2, ..., xLH) and Y = St = (y1, y2, ..., yLY ) where LH and LY are the total number of tokens in the dialogue history and target response respectively. Following the AVSD setting (Hori et al., 2019), we utilize the text input of video caption C. The video caption typically provides a linguistic summary of the video in one or two sentences. The caption can be represented as a sequence of tokens XC = (x1, x2, ..., xLC). We combine all text input sequences to form a single sequence XT = (XC, XH, Y−1) as input to the models. Y−1 is the target response sequence shifted left by one position to enable auto-regressive prediction of output tokens. We denote embedded features as Ztoken T as the token-level encoding layer of the text input. Similar to video features, we add additional layers to inject different attributes of XT (See Figure 1): (1) Modality-level encoding that differentiates 5845 segments in XT. We use 3 different modality tokens: “cap”, “sys”, and “usr” to specify whether the token in the corresponding position is part of input caption, system responses, or user utterances. (2) Turn-level encoding that encodes the turn number of the token in the corresponding position. (3) Position-level encoding that is used to inject signals of the token ordering. Similar to video representation, the encoded input is combined through element-wise summation: ZT = Ztoken T + Zmod T + Zturn T + Zpos T (3) We concatenated both ZV and ZT to create a single input sequence ZV T of length (F ×P +LC +LH + LY ) and embedding dimension d. ZV T is used as input to a pre-trained GPT-2 for fine-tuning. 3.2 Optimization Following a similar strategy adopted by Wolf et al. 
(2019), we fine-tune the models in a multi-task setting with the following objectives: (1) Response Generation: this is a typical objective function that maximizes the likelihood of the output target response conditioned on the source sequence. (2) Masked Multi-modal Modeling: we explore two loss functions: masked language modeling (MLM) and masked visual modeling (MVM). We mask both tokens and spatial regions of video frames in training instances and require the model to regenerate them from the remaining inputs. MLM is learned in the same way as response generation, by passing the output through a linear layer with softmax. MVM is learned by minimizing the L1 loss in feature space between the output representation of the masked visual region and the original input representation. Both are passed through a linear transformation to the same dimensional space. This is similar to the perceptual loss proposed by Johnson et al. (2016) and Dosovitskiy and Brox (2016) for image style transfer and image resolution tasks. We follow BERT (Devlin et al., 2019) and replace about 15% of tokens and image region inputs in each training instance at random with a [MASK] token. The corresponding output representations are then used to recover the original tokens or image regions. (3) Matching Video-Text Pair (MVT): for about 15% of training instances, we adapt the pre-trained language model to the dialogue domain by replacing the original input with an incorrect dialogue or video input at random. We use a special token [CLS] concatenated to the input sequence to learn the contextual representation. The vector integrates contextual cues through Transformer attention layers and the corresponding output representation is used to predict whether the input video-text pair is correct.

4 Experiments

4.1 Experimental Testbed and Setup

We use the open-source implementation of the GPT2 architecture and obtain pre-trained model checkpoints from the HuggingFace transfer-learning-conv-ai repository (https://github.com/huggingface/transfer-learning-conv-ai). We experiment with two pre-trained GPT2 models: small (S) and medium (M) (Radford et al., 2019). We use the Adam optimizer with a learning rate of 5e-5 based on grid search. We adopt a learning rate decay schedule similar to that used by Vaswani et al. (2017). We set the weight on the response generation loss to be 1.5 times higher than the other losses. We experiment with the video-grounded dialogue task in the large-scale AVSD benchmark in DSTC7 (Hori et al., 2019). The AVSD benchmark contains dialogues grounded on the Charades videos (Sigurdsson et al., 2016). Each dialogue consists of up to 10 dialogue turns, each turn including a user utterance and system response (see Table 1 for more details of the dataset). To extract visual features, we used the 3D CNN-based ResNeXt-101 (Xie et al., 2017) pre-trained on Kinetics (Hara et al., 2018) to obtain the spatio-temporal video features. We fixed the batch size to 16 and the maximum sequence length compatible with the corresponding GPT2 models. We sampled video features every 16 frames without overlapping. We trained up to 50 epochs on 4 GPUs. We report the objective scores, including BLEU, METEOR, ROUGE-L, and CIDEr. We compare system-generated responses with 6 reference ground-truth responses.

#        Train      Val.     Test
Dialogs  7,659      1,787    1,710
Turns    153,180    35,740   13,490
Words    1,450,754  339,006  110,252
Table 1: Summary of DSTC7 AVSD.
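To make the multi-task objective of Section 3.2 and the weighting above concrete, the following is a minimal sketch assuming the per-batch loss inputs have already been produced by the model; the variable names are ours, not from the released code, and the auxiliary losses are given equal unit weights, which the text implies but does not state explicitly.

    import torch.nn.functional as F

    def combined_loss(gen_logits, gen_labels, mlm_logits, mlm_labels,
                      region_pred, region_true, match_logits, match_labels,
                      gen_weight=1.5):
        vocab = gen_logits.size(-1)
        # Response generation and MLM: cross-entropy over the vocabulary
        # (positions without a label are marked with ignore_index=-100).
        gen_loss = F.cross_entropy(gen_logits.view(-1, vocab), gen_labels.view(-1), ignore_index=-100)
        mlm_loss = F.cross_entropy(mlm_logits.view(-1, vocab), mlm_labels.view(-1), ignore_index=-100)
        # MVM: L1 loss between reconstructed and original masked region features.
        mvm_loss = F.l1_loss(region_pred, region_true)
        # MVT: binary prediction from the [CLS] output of whether the
        # video-text pair is correct (match_labels is a float tensor of 0/1).
        mvt_loss = F.binary_cross_entropy_with_logits(match_logits, match_labels)
        # Response generation is weighted 1.5x higher than the other losses (Section 4.1).
        return gen_weight * gen_loss + mlm_loss + mvm_loss + mvt_loss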
4.2 Results

We compare the proposed VGD-GPT2 model with the following baseline models: (1) Baseline (Hori et al., 2019) proposes a novel sequence-to-sequence approach with question-guided LSTM on both video visual and audio temporal features. Dialogue history is encoded by a hierarchical LSTM and the final representation is concatenated with question and video representations as input to decode dialogue responses. (2) AVSD Winner (Sanabria et al., 2019) extends the previous work with more refined visual features and transfer learning from a video summary task. (3) MTN (Le et al., 2019) adopts a transformer-based approach with question-guided attention on visual features formulated as an auto-encoding module.

Model Spatial Temporal MLM MVM MVT BLEU1 BLEU2 BLEU3 BLEU4 METEOR ROUGE-L CIDEr
Baseline ✓ 0.626 0.485 0.383 0.309 0.215 0.487 0.746
AVSD Winner ✓ 0.718 0.584 0.478 0.394 0.267 0.563 1.094
MTN ✓ 0.731 0.597 0.490 0.406 0.271 0.564 1.127
VGD-GPT2 (S) ✓ ✓ ✓ ✓ ✓ 0.750 0.621 0.516 0.433 0.283 0.581 1.196
VGD-GPT2 (S) ✓ ✓ ✓ ✓ 0.753 0.619 0.512 0.424 0.280 0.571 1.185
VGD-GPT2 (S) ✓ ✓ ✓ ✓ 0.750 0.616 0.511 0.427 0.280 0.579 1.188
VGD-GPT2 (S) ✓ ✓ ✓ ✓ 0.745 0.613 0.508 0.423 0.281 0.579 1.173
VGD-GPT2 (S) ✓ ✓ ✓ ✓ 0.749 0.613 0.505 0.419 0.274 0.571 1.153
VGD-GPT2 (S) ✓ ✓ ✓ ✓ 0.744 0.612 0.505 0.421 0.281 0.581 1.192
VGD-GPT2 (M) ✓ ✓ ✓ ✓ ✓ 0.749 0.620 0.520 0.436 0.282 0.582 1.194
Table 2: Evaluation on the AVSD benchmark of baselines and different variants of VGD-GPT2 based on: (1) video features in the spatial or temporal (or both) dimension and (2) fine-tuning objective functions: MLM - masked language modeling, MVM - masked visual modeling, and MVT - matching video-text pair.

Table 2 shows the details of our results. Our VGD-GPT2 model outperforms the existing approaches across all the automated metrics. The results show that fine-tuning a language model with video-grounded dialogues can help to generate quality responses and improve model performance. By initializing our models with a language model pre-trained on massive text data, we obtain richer feature representations that capture more complex dependencies between inputs. Compared with the baseline with Transformer-based neural networks (Le et al., 2019), our model treats both visual and text features with equal importance at different levels of different dimensions. Specifically, we align the token level with the spatial level and the turn level with the temporal level between visual and text features. By contrast, MTN only considers the temporal variation of the visual features and mainly focuses on text-based attention. Our early fusion strategy with a multi-level alignment approach of multi-modal inputs allows higher-resolution relations between all feature representations in later layers of the neural networks.

4.3 Ablation Analysis

Table 2 also shows that fine-tuning a pre-trained model with both spatial-temporal information and multi-task objectives can benefit the main task of response generation. To obtain spatial-only and temporal-only features, we follow an approach similar to Jang et al. (2017) and use average pooling to pool the visual features along the temporal or spatial dimensions. Considering CIDEr as the evaluation measure, learning dependencies in both spatial and temporal dimensions improves the performance by 0.01 absolute score over spatial-only features and 0.008 absolute score over temporal-only features.
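The spatial-only and temporal-only variants above can be obtained by average pooling over one axis of the video feature tensor; a minimal sketch follows, with illustrative shapes and variable names rather than the authors' code.

    import torch

    z_v = torch.randn(32, 49, 768)     # (F frames, P regions, d), as in Section 3.1

    # Temporal-only: average over the spatial regions, keeping the frame order.
    temporal_only = z_v.mean(dim=1)    # (F, d)

    # Spatial-only: average over the frames, keeping the spatial layout.
    spatial_only = z_v.mean(dim=0)     # (P, d)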
Our proposed auxiliary objectives also help to improve model performance by adapting the pretrained model to the current data domain, videobased dialogues. MLM and MVM are used to improve learning of local dependencies in token and spatial levels, while MVT is used to support learning global dependencies between text and visual modalities. We observe that adding MVM objective function can increase the CIDEr score the most, by 0.043 absolute score, as compared to adding MVT (0.023 absolute score) or MLM (0.004 absolute score) objective function. We also found moderate performance improvements in BLEU3, BLEU4, and ROUGE-L, when increasing GPT-2 from small to medium size. We note that the increasing model parameters in GPT2 may require longer fine-tuning procedure or a larger dialogue training dataset to fully optimize the models in the dialogue domain. 5 Conclusions In this work, we leverage pre-trained language models for a video-grounded dialogue task. We propose a sequence-to-sequence framework and a multitask fine-tuning approach to adapt the pre-trained models to the video dialogue domain. Despite using GPT-2 models, our framework can be extended with other language models and similarly adopted to improve other multi-modal dialogues. Our early fusion strategy effectively unifies different levels of features in both dialogues and video without complicating the network architecture 5847 References Chris Alberti, Jeffrey Ling, Michael Collins, and David Reitter. 2019. Fusion of detected objects in text for visual question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2131–2140, Hong Kong, China. Association for Computational Linguistics. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433. Paweł Budzianowski and Ivan Vuli´c. 2019. Hello, it’s GPT-2 - how can I help you? towards the use of pretrained language models for task-oriented dialogue systems. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 15–22, Hong Kong. Association for Computational Linguistics. Guan-Lin Chao and Ian Lane. 2019. Bert-dst: Scalable end-to-end dialogue state tracking with bidirectional encoder representations from transformer. arXiv preprint arXiv:1907.03040. Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Uniter: Learning universal image-text representations. arXiv preprint arXiv:1909.11740. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Alexey Dosovitskiy and Thomas Brox. 2016. Generating images with perceptual similarity metrics based on deep networks. In Advances in neural information processing systems, pages 658–666. Kensho Hara, Hirokatsu Kataoka, and Yutaka Satoh. 2018. Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet? 
In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 6546–6555. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. C. Hori, H. Alamri, J. Wang, G. Wichern, T. Hori, A. Cherian, T. K. Marks, V. Cartillier, R. G. Lopes, A. Das, I. Essa, D. Batra, and D. Parikh. 2019. Endto-end audio visual scene-aware dialog using multimodal attention-based video features. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2352–2356. Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. 2017. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708. Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. 2017. Tgif-qa: Toward spatiotemporal reasoning in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2758–2766. Justin Johnson, Alexandre Alahi, and Li Fei-Fei. 2016. Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision, pages 694–711. Springer. Tuan Manh Lai, Quan Hung Tran, Trung Bui, and Daisuke Kihara. 2020. A simple but effective bert model for dialog state tracking on resource-limited systems. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8034–8038. IEEE. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In International Conference on Learning Representations. Hung Le, Doyen Sahoo, Nancy Chen, and Steven Hoi. 2019. Multimodal transformer networks for end-toend video-grounded dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5612–5623, Florence, Italy. Association for Computational Linguistics. Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. 2019. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. arXiv preprint arXiv:1908.06066. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13–23. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. 5848 Ramon Sanabria, Shruti Palaskar, and Florian Metze. 2019. Cmu sinbads submission for the dstc7 avsd challenge. In DSTC7 at AAAI2019 workshop, volume 6. Gunnar A Sigurdsson, G¨ul Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. 2016. Hollywood in homes: Crowdsourcing data collection for activity understanding. In European Conference on Computer Vision, pages 510–526. Springer. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. Vl-bert: Pretraining of generic visual-linguistic representations. In International Conference on Learning Representations. 
Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. 2019. Videobert: A joint model for video and language representation learning. In The IEEE International Conference on Computer Vision (ICCV). Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, and HeuiSeok Lim. 2019. Domain adaptive training bert for response selection. arXiv preprint arXiv:1908.04812. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149. Saining Xie, Ross Girshick, Piotr Doll´ar, Zhuowen Tu, and Kaiming He. 2017. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1492–1500.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5849–5859 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5849 A Unified MRC Framework for Named Entity Recognition Xiaoya Li♣, Jingrong Feng♣, Yuxian Meng♣, Qinghong Han♣, Fei Wu♠and Jiwei Li♠♣ ♠Department of Computer Science and Technology, Zhejiang University ♣Shannon.AI {xiaoya li, jingrong feng, yuxian meng,qinghong han}@shannonai.com [email protected], jiwei [email protected] Abstract The task of named entity recognition (NER) is normally divided into nested NER and flat NER depending on whether named entities are nested or not. Models are usually separately developed for the two tasks, since sequence labeling models are only able to assign a single label to a particular token, which is unsuitable for nested NER where a token may be assigned several labels. In this paper, we propose a unified framework that is capable of handling both flat and nested NER tasks. Instead of treating the task of NER as a sequence labeling problem, we propose to formulate it as a machine reading comprehension (MRC) task. For example, extracting entities with the PER(PERSON) label is formalized as extracting answer spans to the question “which person is mentioned in the text”.This formulation naturally tackles the entity overlapping issue in nested NER: the extraction of two overlapping entities with different categories requires answering two independent questions. Additionally, since the query encodes informative prior knowledge, this strategy facilitates the process of entity extraction, leading to better performances for not only nested NER, but flat NER. We conduct experiments on both nested and flat NER datasets. Experiment results demonstrate the effectiveness of the proposed formulation. We are able to achieve a vast amount of performance boost over current SOTA models on nested NER datasets, i.e., +1.28, +2.55, +5.44, +6.37,respectively on ACE04, ACE05, GENIA and KBP17, as well as flat NER datasets, i.e., +0.24, +1.95, +0.21, +1.49 respectively on English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA and Chinese OntoNotes 4.0. The code and datasets can be found at https://github.com/ShannonAI/ mrc-for-flat-nested-ner. Figure 1: Examples for nested entities from GENIA and ACE04 corpora. 1 Introduction Named Entity Recognition (NER) refers to the task of detecting the span and the semantic category of entities from a chunk of text. The task can be further divided into two sub-categories, nested NER and flat NER, depending on whether entities are nested or not. Nested NER refers to a phenomenon that the spans of entities (mentions) are nested, as shown in Figure 1. Entity overlapping is a fairly common phenomenon in natural languages. The task of flat NER is commonly formalized as a sequence labeling task: a sequence labeling model (Chiu and Nichols, 2016; Ma and Hovy, 2016; Devlin et al., 2018) is trained to assign a single tagging class to each unit within a sequence of tokens. This formulation is unfortunately incapable of handling overlapping entities in nested NER (Huang et al., 2015; Chiu and Nichols, 2015), where multiple categories need to be assigned to a single token if the token participates in multiple entities. Many attempts have been made to reconcile sequence labeling models with nested NER (Alex et al., 2007; Byrne, 2007; Finkel and Manning, 2009; Lu and Roth, 2015; Katiyar and Cardie, 2018), mostly based on the pipelined systems. 
However, pipelined systems suffer from the disadvantages of error propagation, long running time and the intensiveness in developing hand-crafted features, etc. Inspired by the current trend of formalizing 5850 NLP problems as question answering tasks (Levy et al., 2017; McCann et al., 2018; Li et al., 2019), we propose a new framework that is capable of handling both flat and nested NER. Instead of treating the task of NER as a sequence labeling problem, we propose to formulate it as a SQuADstyle (Rajpurkar et al., 2016, 2018) machine reading comprehension (MRC) task. Each entity type is characterized by a natural language query, and entities are extracted by answering these queries given the contexts. For example, the task of assigning the PER(PERSON) label to “[Washington] was born into slavery on the farm of James Burroughs” is formalized as answering the question “which person is mentioned in the text?”. This strategy naturally tackles the entity overlapping issue in nested NER: the extraction of two entities with different categories that overlap requires answering two independent questions. The MRC formulation also comes with another key advantage over the sequence labeling formulation. For the latter, golden NER categories are merely class indexes and lack for semantic prior information for entity categories. For example, the ORG(ORGANIZATION) class is treated as a onehot vector in sequence labeling training. This lack of clarity on what to extract leads to inferior performances. On the contrary, for the MRC formulation, the query encodes significant prior information about the entity category to extract. For example, the query “find an organization such as company, agency and institution in the context” encourages the model to link the word “organization” in the query to location entities in the context. Additionally, by encoding comprehensive descriptions (e.g., “company, agency and institution”) of tagging categories (e.g., ORG), the model has the potential to disambiguate similar tagging classes. We conduct experiments on both nested and flat NER datasets to show the generality of our approach. Experimental results demonstrate its effectiveness. We are able to achieve a vast amount of performance boost over current SOTA models on nested NER datasets, i.e., +1.28, +2.55, +5.44, +6.37, respectively on ACE04, ACE05, GENIA and KBP17, as well as flat NER datasets, i.e., +0.24, +1.95, +0.21, +1.49 respectively on English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA, Chinese OntoNotes 4.0. We wish that our work would inspire the introduction of new paradigms for the entity recognition task. 2 Related Work 2.1 Named Entity Recognition (NER) Traditional sequence labeling models use CRFs (Lafferty et al., 2001; Sutton et al., 2007) as a backbone for NER. The first work using neural models for NER goes back to 2003, when Hammerton (2003) attempted to solve the problem using unidirectional LSTMs. Collobert et al. (2011) presented a CNN-CRF structure, augmented with character embeddings by Santos and Guimaraes (2015). Lample et al. (2016) explored neural structures for NER, in which the bidirectional LSTMs are combined with CRFs with features based on character-based word representations and unsupervised word representations. Ma and Hovy (2016) and Chiu and Nichols (2016) used a character CNN to extract features from characters. 
Recent large-scale language model pretraining methods such as BERT (Devlin et al., 2018) and ELMo (Peters et al., 2018a) further enhanced the performance of NER, yielding state-of-the-art performances. 2.2 Nested Named Entity Recognition The overlapping between entities (mentions) was first noticed by Kim et al. (2003), who developed handcrafted rules to identify overlapping mentions. Alex et al. (2007) proposed two multi-layer CRF models for nested NER. The first model is the inside-out model, in which the first CRF identifies the innermost entities, and the successive layer CRF is built over words and the innermost entities extracted from the previous CRF to identify second-level entities, etc. The other is the outsidein model, in which the first CRF identifies outermost entities, and then successive CRFs would identify increasingly nested entities. Finkel and Manning (2009) built a model to extract nested entity mentions based on parse trees. They made the assumption that one mention is fully contained by the other when they overlap. Lu and Roth (2015) proposed to use mention hyper-graphs for recognizing overlapping mentions. Xu et al. (2017) utilized a local classifier that runs on every possible span to detect overlapping mentions and Katiyar and Cardie (2018) used neural models to learn the hyper-graph representations for nested entities. Ju et al. (2018) dynamically stacked flat NER layers in a hierarchical manner. Lin et al. 5851 (2019a) proposed the Anchor-Region Networks (ARNs) architecture by modeling and leveraging the head-driven phrase structures of nested entity mentions. Luan et al. (2019) built a span enumeration approach by selecting the most confident entity spans and linking these nodes with confidenceweighted relation types and coreferences. Other works (Muis and Lu, 2017; Sohrab and Miwa, 2018; Zheng et al., 2019) also proposed various methods to tackle the nested NER problem. Recently, nested NER models are enriched with pre-trained contextual embeddings such as BERT (Devlin et al., 2018) and ELMo (Peters et al., 2018b). Fisher and Vlachos (2019) introduced a BERT-based model that first merges tokens and/or entities into entities, and then assigned labeled to these entities. Shibuya and Hovy (2019) provided inference model that extracts entities iteratively from outermost ones to inner ones. Strakov´a et al. (2019) viewed nested NER as a sequence-tosequence generation problem, in which the input sequence is a list of tokens and the target sequence is a list of labels. 2.3 Machine Reading Comprehension (MRC) MRC models (Seo et al., 2016; Wang et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016, 2017; Wang et al., 2016; Shen et al., 2017; Chen et al., 2017) extract answer spans from a passage through a given question. The task can be formalized as two multi-class classification tasks, i.e., predicting the starting and ending positions of the answer spans. Over the past one or two years, there has been a trend of transforming NLP tasks to MRC question answering. For example, Levy et al. (2017) transformed the task of relation extraction to a QA task: each relation type R(x, y) can be parameterized as a question q(x) whose answer is y. For example, the relation EDUCATED-AT can be mapped to “Where did x study?”. Given a question q(x), if a non-null answer y can be extracted from a sentence, it means the relation label for the current sentence is R. McCann et al. (2018) transformed NLP tasks such as summarization or sentiment analysis into question answering. 
For example, the task of summarization can be formalized as answering the question “What is the summary?”. Our work is significantly inspired by Li et al. (2019), which formalized the task of entity-relation extraction as a multi-turn question answering task. Different from this work, Li et al. (2019) focused on relation extraction rather than NER. Additionally, Li et al. (2019) utilized a template-based procedure for constructing queries to extract semantic relations between entities and their queries lack diversity. In this paper, more factual knowledge such as synonyms and examples are incorporated into queries, and we present an in-depth analysis of the impact of strategies of building queries. 3 NER as MRC 3.1 Task Formalization Given an input sequence X = {x1, x2, ..., xn}, where n denotes the length of the sequence, we need to find every entity in X, and then assign a label y ∈Y to it, where Y is a predefined list of all possible tag types (e.g., PER, LOC, etc). Dataset Construction Firstly we need to transform the tagging-style annotated NER dataset to a set of (QUESTION, ANSWER, CONTEXT) triples. For each tag type y ∈Y , it is associated with a natural language question qy = {q1, q2, ..., qm}, where m denotes the length of the generated query. An annotated entity xstart,end = {xstart, xstart+1, · · · , xend-1, xend} is a substring of X satisfying start ≤end. Each entity is associated with a golden label y ∈Y . By generating a natural language question qy based on the label y, we can obtain the triple (qy, xstart,end, X), which is exactly the (QUESTION, ANSWER, CONTEXT) triple that we need. Note that we use the subscript “start,end” to denote the continuous tokens from index ‘start’ to ‘end’ in a sequence. 3.2 Query Generation The question generation procedure is important since queries encode prior knowledge about labels and have a significant influence on the final results. Different ways have been proposed for question generation, e.g., Li et al. (2019) utilized a template-based procedure for constructing queries to extract semantic relations between entities. In this paper, we take annotation guideline notes as references to construct queries. Annotation guideline notes are the guidelines provided to the annotators of the dataset by the dataset builder. They are descriptions of tag categories, which are described as generic and precise as possible so that 5852 Entity Natural Language Question Location Find locations in the text, including nongeographical locations, mountain ranges and bodies of water. Facility Find facilities in the text, including buildings, airports, highways and bridges. Organization Find organizations in the text, including companies, agencies and institutions. Table 1: Examples for transforming different entity categories to question queries. human annotators can annotate the concepts or mentions in any text without running into ambiguity. Examples are shown in Table 1. 3.3 Model Details 3.3.1 Model Backbone Given the question qy, we need to extract the text span xstart,end which is with type y from X under the MRC framework. We use BERT (Devlin et al., 2018) as the backbone. To be in line with BERT, the question qy and the passage X are concatenated, forming the combined string {[CLS], q1, q2, ..., qm, [SEP], x1, x2, ..., xn}, where [CLS] and [SEP] are special tokens. 
Then BERT receives the combined string and outputs a context representation matrix E ∈Rn×d, where d is the vector dimension of the last layer of BERT and we simply drop the query representations. 3.3.2 Span Selection There are two strategies for span selection in MRC: the first strategy (Seo et al., 2016; Wang et al., 2016) is to have two n-class classifiers separately predict the start index and the end index, where n denotes the length of the context. Since the softmax function is put over all tokens in the context, this strategy has the disadvantage of only being able to output a single span given a query; the other strategy is to have two binary classifiers, one to predict whether each token is the start index or not, the other to predict whether each token is the end index or not. This strategy allows for outputting multiple start indexes and multiple end indexes for a given context and a specific query, and thus has the potentials to extract all related entities according to qy. We adopt the second strategy and describe the details below. Start Index Prediction Given the representation matrix E output from BERT, the model first predicts the probability of each token being a start index as follows: Pstart = softmaxeach row(E · Tstart) ∈Rn×2 (1) Tstart ∈Rd×2 is the weights to learn. Each row of Pstart presents the probability distribution of each index being the start position of an entity given the query. End Index Prediction The end index prediction procedure is exactly the same, except that we have another matrix Tend to obtain probability matrix Pend ∈Rn×2. Start-End Matching In the context X, there could be multiple entities of the same category. This means that multiple start indexes could be predicted from the start-index prediction model and multiple end indexes predicted from the endindex prediction model. The heuristic of matching the start index with its nearest end index does not work here since entities could overlap. We thus further need a method to match a predicted start index with its corresponding end index. Specifically, by applying argmax to each row of Pstart and Pend, we will get the predicted indexes that might be the starting or ending positions, i.e., ˆIstart and ˆIend: ˆIstart = {i | argmax(P (i) start) = 1, i = 1, · · · , n} ˆIend = {j | argmax(P (j) end) = 1, j = 1, · · · , n} (2) where the superscript (i) denotes the i-th row of a matrix. Given any start index istart ∈ˆIstart and end index iend ∈ˆIend, a binary classification model is trained to predict the probability that they should be matched, given as follows: Pistart,jend = sigmoid(m·concat(Eistart, Ejend)) (3) where m ∈R1×2d is the weights to learn. 3.4 Train and Test At training time, X is paired with two label sequences Ystart and Yend of length n representing the ground-truth label of each token xi being the start index or end index of any entity. We therefore have the following two losses for start and end index predictions: Lstart = CE(Pstart, Ystart) Lend = CE(Pend, Yend) (4) Let Ystart, end denote the golden labels for whether each start index should be matched with each end 5853 index. The start-end index matching loss is given as follows: Lspan = CE(Pstart,end, Ystart, end) (5) The overall training objective to be minimized is as follows: L = αLstart + βLend + γLspan (6) α, β, γ ∈[0, 1] are hyper-parameters to control the contributions towards the overall training objective. The three losses are jointly trained in an end-to-end fashion, with parameters shared at the BERT layer. 
At test time, start and end indexes are first separately selected based on ˆIstart and ˆIend. Then the index matching model is used to align the extracted start indexes with end indexes, leading to the final extracted answers. 4 Experiments 4.1 Experiments on Nested NER 4.1.1 Datasets For nested NER, experiments are conducted on the widely-used ACE 2004, ACE 2005, GENIA and KBP2017 datasets, which respectively contain 24%, 22%, 10% and 19% nested mentions. Hyperparameters are tuned on their corresponding development sets. For evaluation, we use spanlevel micro-averaged precision, recall and F1. ACE 2004 and ACE 2005 (Doddington et al., 2005; Christopher Walker and Maeda, 2006): The two datasets each contain 7 entity categories. For each entity type, there are annotations for both the entity mentions and mention heads. For fair comparison, we exactly follow the data preprocessing strategy in Katiyar and Cardie (2018) and Lin et al. (2019b) by keeping files from bn, nw and wl, and splitting these files into train, dev and test sets by 8:1:1, respectively. GENIA (Ohta et al., 2002) For the GENIA dataset, we use GENIAcorpus3.02p. We follow the protocols in Katiyar and Cardie (2018). KBP2017 We follow Katiyar and Cardie (2018) and evaluate our model on the 2017 English evaluation dataset (LDC2017D55). Training set consists of RichERE annotated datasets, which include LDC2015E29, LDC2015E68, LDC2016E31 and LDC2017E02. We follow the dataset split strategy in Lin et al. (2019b). 4.1.2 Baselines We use the following models as baselines: • Hyper-Graph: Katiyar and Cardie (2018) proposes a hypergraph-based model based on LSTMs. • Seg-Graph: Wang and Lu (2018) proposes a segmental hypergargh representation to model overlapping entity mentions. • ARN: Lin et al. (2019a) proposes AnchorRegion Networks by modeling and levraging the head-driven phrase structures of entity mentions. • KBP17-Best: Ji et al. (2017) gives an overview of the Entity Discovery task at the Knowledge Base Population (KBP) track at TAC2017 and also reports previous best results for the task of nested NER. • Seq2Seq-BERT: Strakov´a et al. (2019) views the nested NER as a sequence-tosequence problem. Input to the model is word tokens and the output sequence consists of labels. • Path-BERT: Shibuya and Hovy (2019) treats the tag sequence as the second best path within in the span of their parent entity based on BERT. • Merge-BERT: Fisher and Vlachos (2019) proposes a merge and label method based on BERT. • DYGIE: Luan et al. (2019) introduces a general framework that share span representations using dynamically constructed span graphs. 4.1.3 Results Table 2 shows experimental results on nested NER datasets. We observe huge performance boosts on the nested NER datasets over previous state-of-the-art models, achieving F1 scores of 85.98%, 86.88%, 83.75% and 80.97% on ACE04, ACE05, GENIA and KBP-2017 datasets, which are +1.28%, +2.55%, +5.44% and +6.37% over previous SOTA performances, respectively. 4.2 Experiments on Flat NER 4.2.1 Datasets For flat NER, experiments are conducted on both English datasets i.e. CoNLL2003 and OntoNotes 5.0 and Chinese datasets i.e. OntoNotes 4.0 and MSRA. Hyperparameters are tuned on their corresponding development sets. 
We report span-level 5854 English ACE 2004 Model Precision Rrecall F1 Hyper-Graph (Katiyar and Cardie, 2018) 73.6 71.8 72.7 Seg-Graph (Wang and Lu, 2018) 78.0 72.4 75.1 Seq2seq-BERT (Strakov´a et al., 2019) 84.40 Path-BERT (Shibuya and Hovy, 2019) 83.73 81.91 82.81 DYGIE (Luan et al., 2019) 84.7 BERT-MRC 85.05 86.32 85.98 (+1.28) English ACE 2005 Model Precision Recall F1 Hyper-Graph (Katiyar and Cardie, 2018) 70.6 70.4 70.5 Seg-Graph (Wang and Lu, 2018) 76.8 72.3 74.5 ARN (Lin et al., 2019a) 76.2 73.6 74.9 Path-BERT (Shibuya and Hovy, 2019) 82.98 82.42 82.70 Merge-BERT (Fisher and Vlachos, 2019) 82.7 82.1 82.4 DYGIE (Luan et al., 2019) 82.9 Seq2seq-BERT (Strakov´a et al., 2019) 84.33 BERT-MRC 87.16 86.59 86.88 (+2.55) English GENIA Model Precision Recall F1 Hyper-Graph (Katiyar and Cardie, 2018) 77.7 71.8 74.6 ARN (Lin et al., 2019a) 75.8 73.9 74.8 Path-BERT (Shibuya and Hovy, 2019) 78.07 76.45 77.25 DYGIE (Luan et al., 2019) 76.2 Seq2seq-BERT (Strakov´a et al., 2019) 78.31 BERT-MRC 85.18 81.12 83.75 (+5.44) English KBP 2017 Model Precision Recall F1 KBP17-Best (Ji et al., 2017) 76.2 73.0 72.8 ARN (Lin et al., 2019a) 77.7 71.8 74.6 BERT-MRC 82.33 77.61 80.97 (+6.37) Table 2: Results for nested NER tasks. micro-averaged precision, recall and F1 scores for evaluation. CoNLL2003 (Sang and Meulder, 2003) is an English dataset with four types of named entities: Location, Organization, Person and Miscellaneous. We followed data processing protocols in Ma and Hovy (2016). OntoNotes 5.0 (Pradhan et al., 2013) is an English dataset and consists of text from a wide variety of sources. The dataset includes 18 types of named entity, consisting of 11 types (Person, Organization, etc) and 7 values (Date, Percent, etc). MSRA (Levow, 2006) is a Chinese dataset and performs as a benchmark dataset. Data in MSRA is collected from news domain and is used as shared task on SIGNAN backoff 2006. There are three types of named entities. OntoNotes 4.0 (Pradhan et al., 2011) is a Chinese dataset and consists of text from news domain. OntoNotes 4.0 annotates 18 named entity types. In this paper, we take the same data split as Wu et al. (2019). English CoNLL 2003 Model Precision Recall F1 BiLSTM-CRF (Ma and Hovy, 2016) 91.03 ELMo (Peters et al., 2018b) 92.22 CVT (Clark et al., 2018) 92.6 BERT-Tagger (Devlin et al., 2018) 92.8 BERT-MRC 92.33 94.61 93.04 (+0.24) English OntoNotes 5.0 Model Precision Recall F1 BiLSTM-CRF (Ma and Hovy, 2016) 86.04 86.53 86.28 Strubell et al. (2017) 86.84 CVT (Clark et al., 2018) 88.8 BERT-Tagger (Devlin et al., 2018) 90.01 88.35 89.16 BERT-MRC 92.98 89.95 91.11 (+1.95) Chinese MSRA Model Precision Recall F1 Lattice-LSTM (Zhang and Yang, 2018) 93.57 92.79 93.18 BERT-Tagger (Devlin et al., 2018) 94.97 94.62 94.80 Glyce-BERT (Wu et al., 2019) 95.57 95.51 95.54 BERT-MRC 96.18 95.12 95.75 (+0.21) Chinese OntoNotes 4.0 Model Precision Recall F1 Lattice-LSTM (Zhang and Yang, 2018) 76.35 71.56 73.88 BERT-Tagger (Devlin et al., 2018) 78.01 80.35 79.16 Glyce-BERT (Wu et al., 2019) 81.87 81.40 81.63 BERT-MRC 82.98 81.25 82.11 (+0.48) Table 3: Results for flat NER tasks. 4.2.2 Baselines For English datasets, we use the following models as baselines. • BiLSTM-CRF from Ma and Hovy (2016). • ELMo tagging model from Peters et al. (2018b). • CVT from Clark et al. (2018), which uses Cross-View Training(CVT) to improve the representations of a Bi-LSTM encoder. • Bert-Tagger from Devlin et al. (2018), which treats NER as a tagging task. 
For Chinese datasets, we use the following models as baselines: • Lattice-LSTM: Zhang and Yang (2018) constructs a word-character lattice. • Bert-Tagger: Devlin et al. (2018) treats NER as a tagging task. • Glyce-BERT: The current SOTA model in Chinese NER developed by Wu et al. (2019), which combines glyph information with BERT pretraining. 4.2.3 Results and Discussions Table 3 presents comparisons between the proposed model and baseline models. For English CoNLL 2003, our model outperforms the finetuned BERT tagging model by +0.24% in terms of F1, while for English OntoNotes 5.0, the pro5855 English OntoNotes 5.0 Model F1 LSTM tagger (Strubell et al., 2017) 86.84 BiDAF (Seo et al., 2017) 87.39 (+0.55) QAnet (Yu et al., 2018) 87.98 (+1.14) BERT-Tagger 89.16 BERT-MRC 91.11 (+1.95) Table 4: Results of different MRC models on English OntoNotes5.0. posed model achieves a huge gain of +1.95% improvement. The reason why greater performance boost is observed for OntoNotes is that OntoNotes contains more types of entities than CoNLL03 (18 vs 4), and some entity categories face the severe data sparsity problem. Since the query encodes significant prior knowledge for the entity type to extract, the MRC formulation is more immune to the tag sparsity issue, leading to more improvements on OntoNotes. The proposed method also achieves new state-of-the-art results on Chinese datasets. For Chinese MSRA, the proposed method outperforms the fine-tuned BERT tagging model by +0.95% in terms of F1. We also improve the F1 from 79.16% to 82.11% on Chinese OntoNotes4.0. 5 Ablation studies 5.1 Improvement from MRC or from BERT For flat NER, it is not immediately clear which proportion is responsible for the improvement, the MRC formulation or BERT (Devlin et al., 2018). On one hand, the MRC formulation facilitates the entity extraction process by encoding prior knowledge in the query; on the other hand, the good performance might also come from the large-scale pre-training in BERT. To separate the influence from large-scale BERT pretraining, we compare the LSTM-CRF tagging model (Strubell et al., 2017) with other MRC based models such as QAnet (Yu et al., 2018) and BiDAF (Seo et al., 2017), which do not rely on large-scale pretraining. Results on English Ontonotes are shown in Table 5. As can be seen, though underperforming BERT-Tagger, the MRC based approaches QAnet and BiDAF still significantly outperform tagging models based on LSTM+CRF. This validates the importance of MRC formulation. The MRC formulation’s benefits are also verified when comparing BERT-tagger English OntoNotes 5.0 Model F1 BERT-Tagger 89.16 Position index of labels 88.29 (-0.87) Keywords 89.74 (+0.58) Wikipedia 89.66 (+0.59) Rule-based template filling 89.30 (+0.14) Synonyms 89.92 (+0.76) Keywords+Synonyms 90.23 (+1.07) Annotation guideline notes 91.11 (+1.95) Table 5: Results of different types of queries. with BERT-MRC: the latter outperforms the former by +1.95%. We plot the attention matrices output from the BiDAF model between the query and the context sentence in Figure 2. As can be seen, the semantic similarity between tagging classes and the contexts are able to be captured in the attention matrix. In the examples, Flevland matches geographical, cities and state. 5.2 How to Construct Queries How to construct query has a significant influence on the final results. 
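Before comparing alternatives, a minimal sketch of how a tag-to-query mapping produces the {[CLS], q1, ..., qm, [SEP], x1, ..., xn} input of Section 3.3.1 may be useful; the query wording follows the annotation-guideline style of Table 1, and the tokenizer choice and example sentence (from Section 1) are illustrative assumptions rather than the released implementation.

    from transformers import BertTokenizer

    # Entity type -> natural language query (annotation-guideline style, Table 1).
    queries = {
        "ORG": "Find organizations in the text, including companies, agencies and institutions.",
        "LOC": "Find locations in the text, including non-geographical locations, "
               "mountain ranges and bodies of water.",
    }

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    context = "Washington was born into slavery on the farm of James Burroughs."

    # One MRC example per entity type; the tokenizer inserts [CLS] and [SEP]
    # so the model sees the query and the passage as a single pair.
    inputs = {tag: tokenizer(query, context, return_tensors="pt")
              for tag, query in queries.items()}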
In this subsection, we explore different ways to construct queries and their influence, including: • Position index of labels: a query is constructed using the index of a tag to , i.e., ”one”, ”two”, ”three”. • Keyword: a query is the keyword describing the tag, e.g., the question query for tag ORG is “organization”. • Rule-based template filling: generates questions using templates. The query for tag ORG is “which organization is mentioned in the text”. • Wikipedia: a query is constructed using its wikipedia definition. The query for tag ORG is ”an organization is an entity comprising multiple people, such as an institution or an association.” • Synonyms: are words or phrases that mean exactly or nearly the same as the original keyword extracted using the Oxford Dictionary. The query for tag ORG is “association”. • Keyword+Synonyms: the concatenation of a keyword and its synonym. • Annotation guideline notes: is the method we use in this paper. The query for tag ORG is ”find organizations including companies, agencies and institutions”. Table 5 shows the experimental results on En5856 Figure 2: An example of attention matrices between the query and the input sentence. Models Train Test F1 BERT-tagger OntoNotes5.0 OntoNotes5.0 89.16 BERT-MRC OntoNotes5.0 OntoNotes5.0 91.11 BERT-tagger CoNLL03 OntoNotes5.0 31.87 BERT-MRC CoNLL03 OntoNotes5.0 72.34 Table 6: Zero-shot evaluation on OntoNotes5.0. BERTMRC can achieve better zero-shot performances. glish OntoNotes 5.0. The BERT-MRC outperforms BERT-Tagger in all settings except Position Index of Labels. The model trained with the Annotation Guideline Notes achieves the highest F1 score. Explanations are as follows: for Position Index Dataset, queries are constructed using tag indexes and thus do not contain any meaningful information, leading to inferior performances; Wikipedia underperforms Annotation Guideline Notes because definitions from Wikipedia are relatively general and may not precisely describe the categories in a way tailored to data annotations. 5.3 Zero-shot Evaluation on Unseen Labels It would be interesting to test how well a model trained on one dataset is transferable to another, which is referred to as the zero-shot learning ability. We trained models on CoNLL 2003 and test them on OntoNotes5.0. OntoNotes5.0 contains 18 entity types, 3 shared with CoNLL03, and 15 unseen in CoNLL03. Table 6 presents the results. As can been seen, BERT-tagger does not have zero-shot learning ability, only obtaining an accuracy of 31.87%. This is in line with our expectation since it cannot predict labels unseen from the training set. The question-answering formalFigure 3: Effect of varying percentage of training samples on Chinese OntoNotes 4.0. BERT-MRC can achieve the same F1-score comparing to BERT-Tagger with fewer training samples. ization in MRC framework, which predicts the answer to the given query, comes with more generalization capability and achieves acceptable results. 5.4 Size of Training Data Since the natural language query encodes significant prior knowledge, we expect that the proposed framework works better with less training data. Figure 3 verifies this point: on the Chinese OntoNotes 4.0 training set, the query-based BERT-MRC approach achieves comparable performance to BERT-tagger even with half amount of training data. 6 Conclusion In this paper, we reformalize the NER task as a MRC question answering task. 
This formalization comes with two key advantages: (1) being capa5857 ble of addressing overlapping or nested entities; (2) the query encodes significant prior knowledge about the entity category to extract. The proposed method obtains SOTA results on both nested and flat NER datasets, which indicates its effectiveness. In the future, we would like to explore variants of the model architecture. Acknowledgement We thank all anonymous reviewers, as well as Jiawei Wu and Wei Wu for their comments and suggestions. The work is supported by the National Natural Science Foundation of China (NSFC No. 61625107 and 61751209). References Beatrice Alex, Barry Haddow, and Claire Grover. 2007. Recognising nested named entities in biomedical text. In Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing, pages 65–72. Association for Computational Linguistics. Kate Byrne. 2007. Nested named entity recognition in historical archive text. In International Conference on Semantic Computing (ICSC 2007), pages 589– 596. IEEE. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051. Jason PC Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional lstm-cnns. arXiv preprint arXiv:1511.08308. Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics, 4:357–370. Julie Medero Christopher Walker, Stephanie Strassel and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia 57. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc V. Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Procfessing, Brussels, Belgium, October 31 - November 4, 2018, pages 1914– 1925. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493–2537. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. George R Doddington, Alexis Mitchell, Mark A Przybocki, Stephanie M Strassel Lance A Ramshaw, and Ralph M Weischedel. 2005. The automatic content extraction (ace) program-tasks, data, and evaluation. In LREC, 2:1. Jenny Rose Finkel and Christopher D Manning. 2009. Nested named entity recognition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 141–150. Association for Computational Linguistics. Joseph Fisher and Andreas Vlachos. 2019. Merge and label: A novel neural network architecture for nested NER. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5840–5850. James Hammerton. 2003. Named entity recognition with long short-term memory. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 172–175. Association for Computational Linguistics. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. 
Heng Ji, Xiaoman Pan, Boliang Zhang, Joel Nothman, James Mayfield, Paul McNamee, and Cash Costello. 2017. Overview of TAC-KBP2017 13 languages entity discovery and linking. In Proceedings of the 2017 Text Analysis Conference, TAC 2017, Gaithersburg, Maryland, USA, November 13-14, 2017. Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446–1459, New Orleans, Louisiana. Association for Computational Linguistics. Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861–871. J-D Kim, Tomoko Ohta, Yuka Tateisi, and Jun’ichi Tsujii. 2003. Genia corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl 1):i180–i182. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 5858 Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360. Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 108–117, Sydney, Australia. Association for Computational Linguistics. Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. arXiv preprint arXiv:1706.04115. Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, and Jiwei Li. 2019. Entity-relation extraction as multi-turn question answering. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1340–1350. Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019a. Sequence-to-nuggets: Nested entity mention detection via anchor-region networks. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5182–5192. Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019b. Sequence-to-nuggets: Nested entity mention detection via anchor-region networks. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5182–5192. Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857–867. Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3036–3046. Xuezhe Ma and Eduard Hovy. 2016. 
End-to-end sequence labeling via bi-directional lstm-cnns-crf. arXiv preprint arXiv:1603.01354. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. Aldrian Obaja Muis and Wei Lu. 2017. Labeling gaps between words: Recognizing overlapping mentions with mention separators. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2608–2618, Copenhagen, Denmark. Association for Computational Linguistics. Tomoko Ohta, Yuka Tateisi, and Jin-Dong Kim. 2002. The genia corpus: An annotated research abstract corpus in molecular biology domain. In Proceedings of the Second International Conference on Human Language Technology Research, HLT ’02, pages 82–86, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018b. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227–2237. Sameer Pradhan, Mitchell P. Marcus, Martha Palmer, Lance A. Ramshaw, Ralph M. Weischedel, and Nianwen Xue, editors. 2011. Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, CoNLL 2011, Portland, Oregon, USA, June 23-24, 2011. ACL. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj¨orkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143– 152, Sofia, Bulgaria. Association for Computational Linguistics. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning, CoNLL 2003, Held in cooperation with HLT-NAACL 2003, Edmonton, Canada, May 31 - June 1, 2003, pages 142–147. 5859 Cicero Nogueira dos Santos and Victor Guimaraes. 2015. Boosting named entity recognition with neural character embeddings. arXiv preprint arXiv:1505.05008. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. Reasonet: Learning to stop reading in machine comprehension. 
In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1047–1055. ACM. Takashi Shibuya and Eduard H. Hovy. 2019. Nested named entity recognition via second-best sequence learning and decoding. CoRR, abs/1909.02250. Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2843–2849, Brussels, Belgium. Association for Computational Linguistics. Jana Strakov´a, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested NER through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326–5331, Florence, Italy. Association for Computational Linguistics. Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. arXiv preprint arXiv:1702.02098. Charles Sutton, Andrew McCallum, and Khashayar Rohanimanesh. 2007. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. Journal of Machine Learning Research, 8(Mar):693–723. Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. arXiv preprint arXiv:1810.01817. Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905. Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211. Wei Wu, Yuxian Meng, Qinghong Han, Muyu Li, Xiaoya Li, Jie Mei, Ping Nie, Xiaofei Sun, and Jiwei Li. 2019. Glyce: Glyph-vectors for chinese character representations. arXiv preprint arXiv:1901.10125. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604. Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dcn+: Mixed objective and deep residual coattention for question answering. arXiv preprint arXiv:1711.00106. Mingbin Xu, Hui Jiang, and Sedtawut Watcharawittayakul. 2017. A local detection approach for named entity recognition and mention detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1237–1247. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Yue Zhang and Jie Yang. 2018. Chinese ner using lattice lstm. arXiv preprint arXiv:1805.02023. Changmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Leung, and Guandong Xu. 2019. A boundary-aware neural model for nested named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 357– 366, Hong Kong, China. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 556–566 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 556 CDL: Curriculum Dual Learning for Emotion-Controllable Response Generation Lei Shen1,2 Yang Feng1,2∗ 1Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China 2University of Chinese Academy of Sciences, Beijing, China {shenlei17z,fengyang}@ict.ac.cn Abstract Emotion-controllable response generation is an attractive and valuable task that aims to make open-domain conversations more empathetic and engaging. Existing methods mainly enhance the emotion expression by adding regularization terms to standard cross-entropy loss and thus influence the training process. However, due to the lack of further consideration of content consistency, the common problem of response generation tasks, safe response, is intensified. Besides, query emotions that can help model the relationship between query and response are simply ignored in previous models, which would further hurt the coherence. To alleviate these problems, we propose a novel framework named Curriculum Dual Learning (CDL) which extends the emotion-controllable response generation to a dual task to generate emotional responses and emotional queries alternatively. CDL utilizes two rewards focusing on emotion and content to improve the duality. Additionally, it applies curriculum learning to gradually generate high-quality responses based on the difficulties of expressing various emotions. Experimental results show that CDL significantly outperforms the baselines in terms of coherence, diversity, and relation to emotion factors. 1 Introduction Infusing emotions into dialogue systems can make conversational agents more human-like and benefit the interaction between human and machine (Prendinger and Ishizuka, 2005; Prendinger et al., 2005; Partala and Surakka, 2004). In some reallife scenarios, we need to customize and control the agent’s emotion so that the agent can express a specific one. For example, in psychological counseling, the agent is supposed to express sadness to ∗Yang Feng is the corresponding author. show the sympathy and also convey happiness to cheer the patient up. 1 q It is very pleasant to have a cup of black tea with sugar on a cold day. (Happy) r1 [Neural] It starts to cool down today. r2 [Like] I will try, thanks for your advice. r3 [Sad] I am frozen to death ... r4 [Disgust] Winner is the worst season. r5 [Angry] You know nothing! r6 [Happy] I really like to drink black tea. 2 q So pets live better than humans now... (Sad) r1 [Disgust] You are so bad. r2 [Happy] Haha, you too. 3 q We should study hard. (Neural) r [Disgust] You are so bad. 4 q Happy birthday, Xinxin. May you be more beautiful, find a good person and get married soon! (Happy) r [Happy] Haha, you too. Table 1: Examples of emotion-controllable response generation (response emotions are denoted in brackets). Example 1 is one query and 6 emotional responses. Example 2 and 3 have different queries, but the responses generated with emotion “Disgust” are the same. Similar to Example 2 and 4 with emotion “Happy”. The emotions of queries are marked in parentheses. Recently, a framework called emotional chatting machine (ECM) (Zhou et al., 2018a) was proposed to address the emotion factor in a controlled manner, which focuses on generating a response with a specific emotion (Example 1 in Table 1). 
In the research field of emotion-controllable response generation, ECM and its successive methods (Colombo et al., 2019; Song et al., 2019) mainly represent the given emotion category as a vector and add it to the decoding steps to influence the procedure of response generation, which would aggravate the safe response problem. For the response generation task, safe response is notorious, as the model tends to produce some generic but meaningless responses, like “Thank you”, “I don’t know”, “Yes”, 557 etc. Due to the constraint of emotion factors, the scale of proper responses shrinks, and the model is more likely to map any query to a frequentlyoccurring response in that emotion category. That is, given “Disgust”, the response would be “You are so bad” in general, while given “Happy”, it would be “Haha, you too” (Example 2 to 4 in Table 1). Intuitively, for a good pair of query and response, they should be in a tight relationship and have equal qualities. Then, both the query-to-response mapping and response-to-query mapping would be easier and more natural. On the contrary, it is hard for a safe response to reach the original query through back-generation, neither on the content level nor the emotion level. At the same time, the difficulties of producing various emotions are different, especially in a noisy and uneven-quality dataset. Therefore, we can evaluate the response based on the feedback from the backward process to improve the coherence (Zhang et al., 2018; Cui et al., 2019; Luo et al., 2019b) and try to learn from easy to hard data to generate appropriate and emotion-rich responses. In this paper, we propose a new framework for emotion-controllable response generation named Curriculum Dual Learning (CDL). We take the learning of response and query generation with emotions as a dual task, and use the duality to model the mutual relation between them. The forward and backward models are trained alternatively via reinforcement learning (RL). Rewards designed here aim to encourage both emotion expression and content consistency. Specifically, emotion expression can be either explicit (embodied in some obvious emotion words) or implicit (reflected by the organization of the entire sentence). For example, “I am happy to meet her again” is explicit with the word “happy”, while “It seems like I have eaten the honey” is implicit, but the happiness can be felt when we consider the sentence as a whole. Based on these features, we use the accuracy of emotion classification of sentences and the proportion of emotion words as feedbacks for explicit and implicit emotions, respectively. For content consistency, we apply the reconstruction probability as the measurement of coherence (Section 3.1). Furthermore, in order to better utilize samples of multiple emotions from the noisy and uneven-quality dataset, we incorporate the curriculum learning (Section 3.2) into our dual learning framework (Section 3.3). Experimental results on both automatic and human evaluations show that for a given query and an emotion category, our CDL can successfully express desired emotion as well as keep the response informative and coherent to the query. 2 Background For emotion-controllable response generation, given a query q and an emotion category er, the goal is to generate a response r′ that is not only meaningful, but also in accordance with the desired emotion. 
Emotional Chatting Machine (ECM) (Zhou et al., 2018a) addresses the emotion factor using three new mechanisms: Emotion Category Embedding, Internal Memory, and External Memory. Specifically, 1) Emotion Category Embedding models the high-level abstraction of emotion expression by embedding emotion categories, and concatenates corresponding embedding to the input at each decoding step. 2) Internal Memory captures the change of implicit internal emotion states with read and write gates, 3) External Memory applies an external emotion vocabulary to express emotion explicitly, and finally assigns different generation probabilities to emotion and generic words. The loss function on one training sample (q, r) (q = q1, q2, ..., qn, r = r1, r2, ..., rm) is defined as: − m X t=1 ptlog(ot)− m X t=1 qtlog(αt)+||M I e,m||, (1) where ot and pt are the predicted token distribution and gold distribution, αt is the probability of choosing an emotion word or a generic word, qt ∈{0, 1} is the true choice between them in r, and M I e,m is the internal emotion state at the last step m. The first term is the cross-entropy loss, the second one is used to supervise the probability of selecting an emotion or generic word, and the last one is used to ensure that the internal emotion state has been expressed completely once the generation is finished. Please refer to the original paper for more details. 3 CDL for Emotion-Controllable Response Generation Since our CDL method is a combination of dual learning (DL) and curriculum learning (CL), we first present the main components of DL, including states, actions, policy and reward, then introduce 558 the plausibility of curriculum learning. Finally, we describe the training algorithm of CDL. Figure 1: The architecture of dual learning. CLS, Mf and Mb are emotion classifier, forward model and backward model, respectively. Red parts are for the forward process, while blue parts are for the backward process. 3.1 DL Architecture The architecture of DL is illustrated in Figure 1. Both the forward model Mf and the backward model Mb are ECMs with independent parameters and are initialized according to the maximum likelihood estimation (MLE). CLS is a pre-trained classifier that calculates the score of implicit emotion expression. In general, Mf generates a response r′ for a given query q and emotion category er, and then obtains the reward R that consists of Re from CLS and Rc from Mb (red parts in Figure 1). Similarly, Mb generates a query q′ for a given response r and emotion category eq, and obtains the reward R that consists of Re and Rc from CLS and Mf (blue parts in Figure 1). These two models are trained alternatively via reinforcement learning (RL). Specifically, an action is the dialogue response to generate. The action space is infinite since arbitrary-length sequences can be generated. A state is denoted by the query, which is further transformed to a vector representation by the encoder. A policy takes the form of a GRU encoder-decoder and is defined by its parameters. Following the work of Li et al. (2016c); Zhang et al. (2018), we use a stochastic representation of the policy, i.e., a probability distribution over actions given states. In order to encourage both content consistency and emotion expression, we introduce two rewards and use them to train Mf and Mb. The definition of the two rewards for model Mf is introduced as follows1. 
Reward for emotion expression For implicit emotion expression, a straightforward method is to employ the pre-trained classifier CLS to evaluate the emotion category of the generated response r′, and use the classification accuracy as the reward: Re1(q,r′) = p(er|r′; ϕ), (2) where ϕ is the parameter of CLS, and it is fixed during training. For explicit emotion expression, the reward is formulated as: Re2(q,r′) = n(wer)/|r′|, (3) where n(wer) is the number of emotion words belong to category er, and |r′| is the length of r′. Then, the emotion reward is defined as: Re(q,r′) = Re1(q,r′) + λRe2(q,r′), (4) where λ controls the relative importance of implicit and explicit rewards. Reward for content consistency If the response are coherent and related to the query, it will be easier to reproduce the query via back generation. Inspired by Zhang et al. (2018); Cui et al. (2019); Luo et al. (2019b), we measure the coherence by means of reconstructing q conditioned on r′. Formally, the content consistency reward is defined as: Rc(q,r′) = p(q|r′, eq; η), (5) where η is the parameter of backward model Mb, and it is fixed during the training of Mf. Overall reward We use the weighted sum of the above two rewards as the final reward: R(q,r′) = Rc(q,r′) + γRe(q,r′), (6) where γ is a hyper-parameter that controls the tradeoff between Rc(q,r′) and Re(q,r′). 3.2 Curriculum Plausibility Intuitively, learning from less noisy and evenquality dataset is simpler, but in this task, the data is inherently complicated as there are multiple emotions mixed in it. To better utilize the data, we integrate curriculum learning into the dual learning framework. The core of curriculum learning 1Rewards for model Mb can be computed in a similar way, where q′, r, b and f replace r′, q, f and b, respectively. Therefore, we omit them here for space limitation and brevity. 559 (Bengio et al., 2009) is to design an evaluation for complexity, and to provide the model with easy samples first, then gradually increase the difficulty. The curriculum is arranged by sorting each sample in training set according to a specific ranking standard. Here, We reorder samples from easy, i.e., with high accuracy of emotion classification, to hard. We consider the classification accuracy after pretraining as an indicator of the learning order. Another intuitive way is to put emotionless samples (labelled as “Neural”) first and then emotional ones, however, it exhibits poor performance in our experiments. At training step t, a batch of training samples is obtained from the top f(t) portions of the entire sorted training samples. Following Platanios et al. (2019) and Cai et al. (2020), we define the function f(t) as: f(t) ≜min(1, r t(1 −c2 0) T + c2 0), (7) where c2 0 is set to 0.01, which means that the model starts training using the 1% easiest training samples, and T is a hyper-parameter that represents the duration of curriculum learning (curriculum length). At the early stage of the training process, the model learns from the samples in the easy part of the curriculum, where there is only one emotion category. As the advance of the curriculum, the difficulty gradually increases, as complex training samples from more different categories appear. After training T batches, training sample of each batch is drawn from the whole training set, which is the same as the conventional training procedure. 3.3 Training of CDL Optimization We use the policy gradient method (Williams, 1992) to find parameters that lead to a larger expected reward. 
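Before spelling out the gradients, it helps to restate the reward and curriculum components of Eqs. (2)–(7) concretely. In the sketch below, the classifier probability of Eq. (2) and the backward reconstruction probability of Eq. (5) are passed in as pre-computed scalars; every function and argument name is illustrative rather than taken from the authors' code.

```python
import math

def emotion_reward(resp_tokens, target_emotion, clf_prob, emotion_lexicon, lam=0.5):
    """R_e = R_e1 + lambda * R_e2 (Eqs. 2-4).
    clf_prob: probability the pre-trained classifier assigns to the target
    emotion for the generated response (implicit expression, Eq. 2).
    The explicit part (Eq. 3) is the fraction of response tokens that are
    emotion words of the target category."""
    n_emo = sum(tok in emotion_lexicon[target_emotion] for tok in resp_tokens)
    r_e2 = n_emo / max(len(resp_tokens), 1)
    return clf_prob + lam * r_e2

def total_reward(r_content, r_emotion, gamma=1.0):
    """R = R_c + gamma * R_e (Eq. 6); R_c is the backward model's probability
    of reconstructing the query from the generated response (Eq. 5)."""
    return r_content + gamma * r_emotion

def curriculum_fraction(t, T, c0_sq=0.01):
    """Eq. (7): portion of the difficulty-sorted training set available at
    step t; c0_sq is set to 0.01 in the paper and T is the curriculum length."""
    return min(1.0, math.sqrt(t * (1.0 - c0_sq) / T + c0_sq))

print(curriculum_fraction(50_000, 100_000))  # about 0.71 halfway through T
```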
For the forward learning process, the expected reward of the generated response r′ and its approximate gradient are defined as: J(θ) = E[R(q,r′)], (8) ∇θJ(θ) ≃R′ (q,r′) · ∇θlog(pθ(r′|q, er)), (9) where θ is the parameter of forward model Mf, R′ (q,r′) = R(q,r′) −bf, and bf is the baseline value from the greedy search decoding method for Mf, which is used to reduce the variance of the estimation (Zaremba and Sutskever, 2015; Paulus et al., 2017). Analogously, for the backward learning process, the expected reward of the generated query q′ and corresponding approximate gradient are defined as: J(η) = E[R(r,q′)], (10) ∇ηJ(η) ≃R′ (r,q′) · ∇ηlog(pη(q′|r, eq)), (11) where η is the parameter of backward model Mb, R′ (r,q′) = R(r,q′) −bb, and bb is the baseline value from the greedy search decoding method for Mb. Algorithm 1 Curriculum dual learning algorithm for emotion-controllable response generation Input: The training set D = {(qi, eqi, ri, eri)} where each query-response pair is labelled with corresponding emotion labels eqi and eri Output: Mf and Mb 1: Pre-train Mf and Mb with (qi, ri, eri) and (ri, qi, eqi), respectively, based on Eq. 1 2: Pre-train CLS with (qi, eqi) and (ri, eri) 3: Sort training samples according to the ranking standard in Section 3.2 for both forward and backward learning process to get Df and Db 4: for training step t = 1, ..., T do 5: ▷Train Mf 6: Sample a batch Bft in Df based on Eq. 7 7: Sample (q, r, er) from Bft 8: Generate response r′ via Mf 9: Compute reward R based on Eq. 6 10: Update θ using R based on Eq. 9 11: Teacher Forcing: Update θ with (q, r, er) 12: ▷Train Mb 13: Sample a batch Bbt in Db based on Eq. 7 14: Sample (r, q, eq) from Bbt 15: Generate response q′ via Mb 16: Compute reward R based on Eq. 6 17: Update η using R based on Eq. 11 18: Teacher Forcing: Update η with (r, q, eq) 19: end for Teacher Forcing When Mf and Mb are trained with only the rewards from the dual tasks, the training process would easily collapse as it may find an unexpected way to achieve a high reward but fail to guarantee the fluency or readability of the generated text (Ranzato et al., 2015; Pasunuru and Bansal, 2018; Luo et al., 2019b). To stabilize the training process, after each update according to Eq. 9 or 11, Mf or Mb is exposed to real queryresponse pairs and is trained via MLE, which is also known as Teacher Forcing (Li et al., 2017; Lamb et al., 2016). The training procedure of CDL is summarized in Algorithm 1. First, we use MLE to pre-train 560 Mf, Mb and CLS with query-response pairs and emotion labels in the training set. After the pretraining phase, we sort samples in the training set following the ranking standard in Section 3.2. For forward learning process, the ranking is based on responses, while for backward learning process, it is based on queries. Then, we can get two sorted training set Df and Db for each direction. Finally, Mf and Mb are optimized with rewards and the regularization of Teacher Forcing, alternatively. 4 Experiments In this section, we conduct experiments to evaluate our proposed method. We first introduce some empirical settings, including dataset, hyperparameters, baselines, and evaluation measures. Then we illustrate our results under both automatic and human evaluations. Finally, we give out some cases generated by different models and do further analyses over our method. 
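As a concrete reading of Eqs. (8)–(9) together with the greedy-decoding baseline and the Teacher Forcing step of Algorithm 1, here is a minimal sketch of one forward-model update, reusing the reward helpers sketched above. The model and classifier interfaces (sample, greedy, log_prob, prob) are assumptions for illustration, not the authors' API.

```python
def train_forward_step(model_f, model_b, clf_prob, emotion_lexicon,
                       batch, optimizer, gamma=1.0, lam=0.5):
    """One update of the forward model M_f (Algorithm 1, lines 6-11), sketched.

    Assumed interface (hypothetical, not the released code):
      model_f.sample(q, e) / model_f.greedy(q, e) -> list of response tokens
      model_f.log_prob(q, e, resp)                -> differentiable log p(resp | q, e)
      model_b.prob(resp, e_q, q)                  -> scalar p(q | resp, e_q), Eq. (5)
      clf_prob(resp, e)                           -> scalar p(e | resp), Eq. (2)
    """
    loss = 0.0
    for q, e_q, r, e_r in batch:
        sampled = model_f.sample(q, e_r)   # action drawn from the policy
        greedy = model_f.greedy(q, e_r)    # baseline rollout b_f

        def reward(resp):
            r_e2 = sum(t in emotion_lexicon[e_r] for t in resp) / max(len(resp), 1)
            r_e = clf_prob(resp, e_r) + lam * r_e2            # Eq. (4)
            return model_b.prob(resp, e_q, q) + gamma * r_e   # Eq. (6)

        advantage = reward(sampled) - reward(greedy)          # R - b_f
        loss = loss - advantage * model_f.log_prob(q, e_r, sampled)  # Eq. (9)
        loss = loss - model_f.log_prob(q, e_r, r)             # Teacher Forcing (MLE)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```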
4.1 Dataset We apply our method on the corpus of NLPCC 2017 Emotional Conversation Generation Challenge2, namely NLPCC2017 Dataset, which is an extension version of the dataset collected by Zhou et al. (2018a). The provided dataset is already segmented into Chinese words. There are over 1 million query-response pairs, in which both the query and response are labelled with one emotion tag among “Happy”, “Angry”, “Disgust”, “Sad”, “Like” and “Neutral”. The dataset has been tokenized into words. We randomly split the whole dataset into training/validation/test set with the number of 1,105,487/11,720/2,000. The detailed statistics of training set are shown in Table 2. Training Emotion Query Response Happy 120,358 197,528 Angry 79,611 138,198 Disgust 184,427 197,428 Sad 128,482 179,215 Like 257,471 197,565 Neutral 335,138 195,553 1,105,487 Validation 11,720 Test 2,000 Table 2: Statistics of the NLPCC2017 Dataset. In the training set, we count the number of queries and responses for each emotion category. 2http://coai.cs.tsinghua.edu.cn/hml/ challenge2017/ 4.2 Hyper-parameter Settings The settings of both Mf and Mb follow the default implementation details of original ECM paper (Zhou et al., 2018a), where the encoder and decoder have 2-layer GRU structures with 256 hidden cells for each layer, the embedding size of words and emotion categories are set to 100, and the vocabulary size is limited to 40,000. The minimum and maximum sentence length is set to 3 and 30, respectively. We train a TextCNN-based classifier (Kim, 2014) and the classification accuracy reaches 65.6% on the test set, which has the similar performance with those used by (Zhou et al., 2018a) and (Song et al., 2019). Before curriculum dual learning, model Mf and Mb are pre-trained 10 epochs via MLE. The optimizer is Adam (Kingma and Ba, 2015) with 0.05 initial learning rate for pre-training and 10−5 for curriculum dual learning. The batch size is set to 64. λ in Eq. 4 is 0.5, γ in Eq. 6 is 1 and T in Eq. 7 is 100k. During curriculum dual learning, training runs until the performance on validation set does not improve. 4.3 Baselines We compare our approach with four representative baselines: (1) S2S-Attn: The Seq2Seq model with attention mechanism as in Shang et al. (2015). (2) EmoEmb: A Seq2Seq variant which takes the embedding of emotion categories as additional input at each decoding position (Ficler and Goldberg, 2017; Li et al., 2016b). (3) EmoDS: An emotional dialogue system with lexicon-based attention and a word-based classifier (Song et al., 2019). (4) ECM: Emotional Chatting Machine proposed by Zhou et al. (2018a). Additionally, we also conduct ablation study to better analyze our method as follows: (5) CDLemo: CDL with emotion reward only; (6) CDLcon: CDL with content reward only, which is similar to the work of Zhang et al. (2018); (7) CDL-DL: CDL with both rewards but without curriculum learning. 4.4 Evaluation Measures To better evaluate our results, we use both quantitative metrics and human judgements in our experiments. 4.4.1 Automatic Metrics For automatic evaluation, we mainly choose four kinds of metrics: 1) Embedding scores (Average, 561 Method Embedding Metrics Diversity BLEU Scores Emotion Expression Avg. Ext. Gre. Coh. Dist-1 Dist-2 BLEU-1 BLEU-2 Emo-acc. Emo-word. 
S2S-Attn 0.497 0.352 0.328 0.582 0.035 0.119 0.0424 0.0073 0.244 0.285 EmoEmb 0.532 0.381 0.356 0.594 0.040 0.133 0.0722 0.0164 0.693 0.436 EmoDS 0.623 0.427 0.403 0.603 0.050 0.174 0.0976 0.0282 0.746 0.527 ECM 0.625 0.433 0.405 0.607 0.052 0.177 0.1023 0.0332 0.753 0.562 CDL-emo (ours) 0.631 0.451 0.435 0.615 0.058 0.193 0.1162 0.0342 0.765 0.583 CDL-con (ours) 0.628 0.441 0.417 0.612 0.055 0.182 0.1059 0.0338 0.758 0.566 CDL-DL (ours) 0.635 0.452 0.431 0.630 0.062 0.217 0.1187 0.0353 0.794 0.615 CDL (ours) 0.642 0.457 0.438 0.635 0.065 0.221 0.1254 0.0370 0.823 0.620 Table 3: Automatic evaluation results for content and emotion measurements. The metrics Average, Extrema, Greedy, Coherence, Emotion-acc and Emotion-word are abbreviated as Avg., Ext., Gre., Coh., Emo-acc. and Emo-word., respectively. Method Like Sad Disgust Angry Happy Overall Con. Emo. Con. Emo. Con. Emo. Con. Emo. Con. Emo. Con. Emo. S2S-Attn 1.295 0.435 1.125 0.120 1.160 0.115 1.255 0.045 1.155 0.305 1.198 0.204 EmoEmb 1.290 0.630 0.990 0.225 1.125 0.295 1.220 0.220 1.275 0.400 1.180 0.354 EmoDS 1.375 0.685 1.210 0.395 1.200 0.340 1.225 0.345 1.260 0.535 1.254 0.460 ECM 1.375 0.690 1.205 0.425 1.205 0.325 1.240 0.385 1.255 0.590 1.256 0.483 CDL 1.395 0.700 1.245 0.565 1.235 0.490 1.250 0.525 1.305 0.630 1.286 0.582 Table 4: Human evaluation results. “Con.” and “Emo.” denote content and emotion, respectively. Greedy, Extrema and Coherence)3 (Liu et al., 2016; Xu et al., 2018); 2) BLEU scores (Papineni et al., 2002) in 0 to 1 scale; 3) Dist-1, Dist-2 (Li et al., 2016a) and 4) Emotion-acc, Emotion-word (Zhou et al., 2018a; Song et al., 2019). Embedding scores and BLEU scores are used to measure the quality of generated responses in terms of content relevance. Whereas, Dist-1 and Dist-2 are used to evaluate the diversity of responses4. Emotion-acc and Emotion-word are utilized to test the emotion expression. Specifically, Emo-acc is the agreement between the ground truth labels and the predicted labels through the TextCNN classifier trained before. Emo-word is the percentage of the generated responses that contain the corresponding emotion words. Since there are no multi-emotion ground truths in the test set, we only calculate the metrics between the ground truth, labelled emotion e, and the generated response given also label e for fair comparison. 4.4.2 Human Evaluation Settings Inspired by Zhou et al. (2018a); Song et al. (2019), a human evaluation is conducted to better analyze the quality of generated responses. First, we randomly sample 200 queries from the test set. For 3We use the pre-trained word embeddings based on Sina Weibo data from https://github.com/Embedding/ Chinese-Word-Vectors. 4We employ a popular NLG evaluation project available at https://github.com/Maluuba/nlg-eval for automatic evaluation. each method except S2S-Attn, they generate six responses for six emotion categories, while S2SAttn generates top 6 responses from beam search decoding for each query. Then, we send the triples of (query, response, emotion) to three human annotators without order, and require them to evaluate each response on both content level and emotion level independently. Content and emotion are measured by a 3-scale rating (0, 1, 2) and a 2-scale rating (0, 1), respectively. Evaluation from the content level assesses whether a response is fluent, coherent and meaningful for the query, and evaluation from the emotion level decides if a response reveals the desired emotion. 
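Of the automatic metrics in Section 4.4.1, the diversity scores are simple enough to restate in code. Below is a minimal sketch of Dist-n, assuming whitespace-tokenized responses; the paper itself reports the scores produced by the nlg-eval toolkit, so this is only an illustration of the definition.

```python
def distinct_n(responses, n):
    """Dist-n (Li et al., 2016a): number of distinct n-grams divided by the
    total number of n-grams over all generated responses."""
    total, unique = 0, set()
    for tokens in responses:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

responses = ["i do not know".split(), "i really like black tea".split()]
print(distinct_n(responses, 1), distinct_n(responses, 2))  # ~0.89 and 1.0
```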
4.5 Experimental Results Now we demonstrate our experimental results on both automatic evaluation and human evaluation. 4.5.1 Automatic Evaluation Results The automatic results are shown in Table 3. The top part is the results of all baseline models, and we can see that CDL outperforms the other methods on all metrics (t-test, p-value < 0.05). The improvements of CDL on Coherence, Emotion-acc and Emotion-word are significant, indicating that it can enhance content consistency and emotion expression simultaneously. EmoDS and ECM have similar performance, as both of them use the forward method to pay more attention on the emotion 562 Query Method Response Translated Response S2S-Attn 生活需要改变心态。 Life needs a change of mindset. ECM Like 希望有个好心情。 Hope to have a good mood. Sad 我也是很纠结的。 I am also very tangled. Disgust 你太过分了。 You are too much. 创意源于生活,看你对生 Angry 你怎么还没睡觉? how come you are still up? 活的态度如何。 Happy 哈哈,是啊。 Haha, yes. Creativity comes from life, CDL Like 希望每天都有好心情。 Hope to have a good mood every day. and depends on your Sad 我觉得我很不淡定。 I do not think I am calm. attitude. Disgust 别说废话了。 Stop talking nonsense. Angry 你根本不懂生活! You do not understand life at all! Happy 开心是最重要的。 Happiness is the most important. S2S-Attn 我不做好事。 I do not do good things. ECM Like 是的,喜欢就好。 Yes, it is ok you like it. Sad 是啊,我很无辜的。 Yeah, I am innocent. Disgust 不是不是,不是好人。 No, no, not a good person. Angry 你是什么意思??? What do you mean??? 善良的人还是存在的。 Happy 哈哈,你也是。 Haha, you too. Kind people still exist. CDL Like 你是不是在夸我? Are you praising me? Sad 可惜我不想做。 Unfortunately, I do not want to be. Disgust 听起来像假话。 It sounds like a lie. Angry 我一点也没觉得。 I feel nothing at all. Happy 要对生活充满希望。 Be hopeful about life. S2S-Attn 我也很喜欢。 I like it, too. ECM Like 我也喜欢秋天。 I also like autumn. Sad 我也想念秋天。 I also miss autumn. Disgust 你太过分啦。 You are too much. 我最喜欢的季节就是秋 Angry 你怎么不说我是爱? Why don’t you say that I love it? 天。 Happy 哈哈,我也喜欢。 Haha, I like it too. Autumn is my favorite CDL Like 秋天的天空非常美丽。 The autumn sky is very beautiful. season. Sad 我很想念过去的夏天。 I miss the past summer very much. Disgust 秋天的景色很难看。 The autumn scenery is ugly. Angry 你有病吧? Are you insane? Happy 哈哈,要不要去秋游? Haha, do you want to go to the autumn tour? Table 5: Sample responses generated by S2S-Attn, ECM and CDL (original Chinese and English translation). The colored words are the emotion words corresponding to the given emotion category. factor. S2S-Attn can only generate fluent responses based on semantic mapping, but fail to express diverse responses. The bottom part of Table 3 shows the results of our ablation study. Comparisons among CDL-emo, CDL-con and CDL show the effectiveness of the combined reward for both emotion expression and content consistency. In addition, we can find that with the support of curriculum learning, CDL can achieve better results than CDL-DL. 4.5.2 Human Evaluation Results The results are shown in Table 4. CDL obtains the best performance (t-test, p-value < 0.05) on both emotion expression (0.582) and content coherence (1.286). As we can see, there is no obvious difference between EmoDS and ECM. Due to the insufficient training data of “Anger” (79,611 in queries and 138,198 in responses), S2S-Attn achieves the best content score for it, which is similar to the results of Zhou et al. (2018a). 
Method (%) 2-1 1-1 0-1 2-0 1-0 0-0 S2S-Attn 10.3 7.2 2.8 36.4 26.5 16.8 EmoEmb 21.8 12.6 7.5 24.6 15.3 18.2 EmoDS 28.7 15.6 4.0 22.7 13.5 15.5 ECM 27.1 12.7 4.5 23.5 15.4 16.8 CDL 32.5 17.6 4.1 17.7 12.8 15.3 Table 6: The percentage of responses in human evaluation of Content-Emotion scores. 2-1 means content score is 2 and emotion score is 1. Results of emotion and content in Table 4 are independent. To better evaluate the overall quality of the generated responses, we present results in Table 6 by considering content and emotion scores simultaneously. 32.5% of the responses generated by CDL are annotated with Emotion score 2 and Content score 1, which shows that CDL is better at producing coherent as well as emotion-rich responses. 563 Agreements to measure the consistency among three annotators are calculated with the Fleiss’ kappa (Fleiss and Cohen, 1973). Fleiss’ kappa for content and emotion is 0.497 and 0.825, indicating “Moderate agreement” and “Substantial agreement”, respectively. 4.6 Case Study Table 5 shows the examples generated by S2S-Attn, ECM and CDL. As can be seen from it, for a given post, there are multiple emotion categories that are appropriate for its response in the conversation. S2S-Attn generates a response with a random emotion, while ECM and CDL can utilize the specific emotion label. Compared with ECM, CDL can generate both coherent and informative responses with any desired emotion. In addition, the emotion can be expressed in either explicit or implicit manner. For example, “你/根本/不懂/生活!(You do not understand life at all!)” express anger when we read this sentence as a whole, while “美丽(beautiful)” or “开心(happy)” are strong emotion words to represent “Like” or “Happy”. 4.7 Further Analysis of CDL Here, we conduct a further analysis to show some characteristics of this task and the effect of CDL. Emotion lexicon size and classification accuracy after pre-training of each category (N(correct prediction) ÷ category size) are listed in Table 7. We can see that the classification accuracy is not totally related to the emotion lexicon size, indicating the emotion expression is partially implicit or explicit. To better illustrate the learning efficiency of CDL, we plot the changes of Emotion-acc on the validation set. As shown in Figure 2, CDL accelerates the learning effectively and consistently outperforms CDL-DL. Figure 2: Comparison of CDL and CDL-DL for Emotion-acc on the validation set. Like Sad Disgust Angry Happy Lex. Size 1,629 294 1,142 30 405 ACC (f) 0.653 0.691 0.609 0.736 0.818 ACC (b) 0.690 0.655 0.602 0.756 0.808 Table 7: Emotion lexicon size and classification accuracy after pre-training of each emotion category. “Lex.”, “ACC(f)” and “ACC(b)” represent lexicon, classification accuracy of forward process and classification accuracy of backward process, respectively. 5 Related Work Responses generated by traditional open-domain dialogue systems are usually safe and generic. To produce diverse and informative responses, researchers tried to either import latent variables for model construction (Zhao et al., 2017; Serban et al., 2017; Shen et al., 2019) or utilize some extra knowledge, e.g., sentence types, personas, emotions, documents and knowledge triples/graphs (Ke et al., 2018; Li et al., 2016b; Zhou et al., 2018a; Meng et al., 2019; Zhou et al., 2018b; Niu et al., 2019). In this paper, we mainly touch on two branches of research: emotional response generation and dual learning in NLP. 
5.1 Emotional Response Generation Early studies have proven that dialogue systems with proper emotional expressions and reactions can directly improve user satisfaction (Prendinger and Ishizuka, 2005; Prendinger et al., 2005) and contribute to effective users’ performance (Partala and Surakka, 2004). Polzin and Waibel (2000) and Polzin and Waibel (2000) apply rule-based methods to choose emotional responses from a conversation corpus, but those rules are hard to extend to large corpora. With the advent of deep learning, some researchers utilize neural networks to solve this problem (Ghosh et al., 2017; Hu et al., 2017; Zhou and Wang, 2018; Sun et al., 2018). Besides, the Valence, Arousal, and Dominance (VAD) lexicon (Warriner et al., 2013; Mohammad, 2018) is embedded to the sequence-to-sequence model (Sutskever et al., 2014) to provide extra affective information (Asghar et al., 2018; Zhong et al., 2019). Responses generated by above studies can simply continues the emotion of the query. To generate emotion-controllable responses, Zhou et al. (2018a) address the emotion factor in large-scale conversations, and propose ECM to generate responses based on different given emotions. After that, Colombo et al. (2019) augment ECM with 564 VAD embeddings and modified the loss function and decoding procedure. Song et al. (2019) use lexicon-based attention and a word-based classifier to improve the ability of emotion expression. 5.2 Dual Learning in NLP He et al. (2016) propose Dual Learning (DL) for machine translation first which consider the source to target language translation and target to source language translation as a dual task. After that, Tang et al. (2017) implement a dual framework for the question answering system. Both Zhang et al. (2018) and Cui et al. (2019) use similar idea in dialogue generation task to produce coherent but not safe responses, since they find that a more diverse and specific response usually has a higher probability of being transformed back to the given query. Luo et al. (2019b) and Luo et al. (2019a) exploit DL in unsupervised text style transfer to relieve the need of parallel data. The differences between our method and those in Section 5.1 and Section 5.2 are: (1) We consider the emotion expression and content consistency simultaneously via a DL method. (2) Instead of regarding the query as an emotionless sentence, we utilize the emotion of query, which can help model the emotion shifting and coherence to improve the quality of response. (3) To better model the changes in emotion and content between the query and response, we combine the DL method with curriculum learning, which is known to improve the effectiveness and generalization. 6 Conclusion In this paper, we propose a new framework Curriculum Dual Learning (CDL) for generating emotional responses in a controlled manner. Since existing methods in this field only focus on the emotion expression of target label but fail to consider the emotion of queries, the safe response problem deteriorates and hurts the content consistency. CDL utilizes two kinds of rewards to enhance emotion and content simultaneously via dual learning. Besides, with the support of curriculum learning, it can be more efficient. Experimental results show that CDL can generate fluent, coherent, informative as well as emotional responses. Acknowledgements This work was supported by National Key R&D Program of China (NO. 2017YFE0192900). We sincerely thank the anonymous reviewers for their helpful and valuable suggestions. 
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5860–5870 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5860 An Effective Transition-based Model for Discontinuous NER Xiang Dai1,2 Sarvnaz Karimi1 Ben Hachey3 Cecile Paris1 1CSIRO Data61, Sydney, Australia 2University of Sydney, Sydney, Australia 3Harrison.ai, Sydney, Australia {dai.dai,sarvnaz.karimi,cecile.paris}@csiro.au [email protected] Abstract Unlike widely used Named Entity Recognition (NER) data sets in generic domains, biomedical NER data sets often contain mentions consisting of discontinuous spans. Conventional sequence tagging techniques encode Markov assumptions that are efficient but preclude recovery of these mentions. We propose a simple, effective transition-based model with generic neural encoding for discontinuous NER. Through extensive experiments on three biomedical data sets, we show that our model can effectively recognize discontinuous mentions without sacrificing the accuracy on continuous mentions. 1 Introduction Named Entity Recognition (NER) is a critical component of biomedical natural language processing applications. In pharmacovigilance, it can be used to identify adverse drug events in consumer reviews in online medication forums, alerting medication developers, regulators and clinicians (Leaman et al., 2010; Sarker et al., 2015; Karimi et al., 2015b). In clinical settings, NER can be used to extract and summarize key information from electronic medical records such as conditions hidden in unstructured doctors’ notes (Feblowitz et al., 2011; Wang et al., 2018b). These applications require identification of complex mentions not seen in generic domains (Dai, 2018). Widely used sequence tagging techniques (flat model) encode two assumptions that do not always hold: (1) mentions do not nest or overlap, therefore each token can belong to at most one mention; and, (2) mentions comprise continuous sequences of tokens. Nested entity recognition addresses violations of the first assumption (Lu and Roth, 2015; Katiyar and Cardie, 2018; Sohrab and Miwa, 2018; Ringland et al., 2019). However, the violation of The left atrium is mildly dilated . E1 E1 have much muscle pain and fatigue . E2 E3 E3 Figure 1: Examples involving discontinuous mentions, taken from the ShARe 13 (Pradhan et al., 2013) and CADEC (Karimi et al., 2015a) data sets, respectively. The first example contains a discontinuous mention ‘left atrium dilated’, the second example contains two mentions that overlap: ‘muscle pain’ and ‘muscle fatigue’ (discontinuous). the second assumption is comparatively less studied and requires handling discontinuous mentions (see examples in Figure 1). In contrast to continuous mentions which are often short spans of text, discontinuous mentions consist of components that are separated by intervals. Recognizing discontinuous mentions is particularly challenging as exhaustive enumeration of possible mentions, including discontinuous and overlapping spans, is exponential in sentence length. Existing approaches for discontinuous NER either suffer from high time complexity (McDonald et al., 2005) or ambiguity in translating intermediate representations into mentions (Tang et al., 2013a; MetkeJimenez and Karimi, 2016; Muis and Lu, 2016). In addition, current art uses traditional approaches that rely on manually designed features, which are tailored to recognize specific entity types. 
Also, these features usually do not generalize well in different genres (Leaman et al., 2015). Motivations The main motivation for recognizing discontinuous mentions is that they usually represent compositional concepts that differ from concepts represented by individual components. For example, the mention ‘left atrium dilated’ in the first example of Figure 1 describes a disorder which has its own CUI (Concept Unique Identi5861 fier) in UMLS (Unified Medical Language System), whereas both ‘left atrium’ and ‘dilated’ also have their own CUIs. We argue that, in downstream applications such as pharmacovigilance and summarization, recognizing these discontinuous mentions that refer to disorders or symptoms is more useful than recognizing separate components which may refer to body locations or general feelings. Another important characteristic of discontinuous mentions is that they usually overlap. That is, several mentions may share components that refer to the same body location (e.g., ‘muscle’ in ‘muscle pain and fatigue’), or the same feeling (e.g., ‘Pain’ in ‘Pain in knee and foot’). Separating these overlapping mentions rather than identifying them as a single mention is important for downstream tasks, such as entity linking where the assumption is that the input mention refers to one entity (Shen et al., 2015). Contributions We propose an end-to-end transition-based model with generic neural encoding that allows us to leverage specialized actions and attention mechanism to determine whether a span is the component of a discontinuous mention or not.1 We evaluate our model on three biomedical data sets with a substantial number of discontinuous mentions and demonstrate that our model can effectively recognize discontinuous mentions without sacrificing the accuracy on continuous mentions. 2 Prior Work Existing methods on discontinuous NER can be mainly categorized into two categories: token level approach, based on sequence tagging techniques, and sentence level approach, where a combination of mentions within a sentence is jointly predicted (Dai, 2018). Token level approach Sequence tagging model takes a sequence of tokens as input and outputs a tag for each token, composed of a position indicator (e.g., BIO schema) and an entity type. The vanilla BIO schema cannot effectively represent discontinuous, overlapping mentions, therefore, some studies overcome this limitation via expanding the BIO tag set (Tang et al., 2013a; Metke-Jimenez and Karimi, 2016; Dai et al., 2017; Tang et al., 2018). In addition to BIO indicators, four new position indicators are introduced in (Metke-Jimenez and 1Code available at GitHub: https://bit.ly/2XazEAO Karimi, 2016) to represent discontinuous mentions that may overlap: • BH: Beginning of Head, defined as the components shared by multiple mentions; • IH: Intermediate of Head; • BD: Beginning of Discontinuous body, defined as the exclusive components of a discontinuous mention; and • ID: Intermediate of Discontinuous body. Sentence level approach Instead of predicting whether each token belongs to an entity mention and its role in the mention, sentence level approach predicts a combination of mentions within a sentence. A hypergraph, proposed by Lu and Roth (2015) and extended in (Muis and Lu, 2016), can compactly represent discontinuous and overlapping mentions in one sentence. A sub-hypergraph of the complete hypergraph can, therefore, be used to represent a combination of mentions in the sentence. 
For the token at each position, there can be six different node types: • A: mentions that start from the current token or a future token; • E: mentions that start from the current token; • T: mentions of a certain entity type that start from the current token; • B: mentions that contain the current token; • O: mentions that have an interval at the current token; • X: mentions that end at the current token. Using this representation, a single entity mention can be represented as a path from node A to node X, incorporating at least one node of type B. Note that both token level and sentence level approaches predict first an intermediate representation of mentions (e.g., a sequence of tags in (MetkeJimenez and Karimi, 2016) and a sub-hypergraph in (Muis and Lu, 2016)), which are then decoded into the final mentions. During the final decoding stage, both models suffer from some level of ambiguity. Taking the sequence tagging model using BIO variant schema as an example, even if the model can correctly predict the gold sequence of tags for the example sentence ‘muscle pain and 5862 fatigue’ (BH I O BD), it is still not clear whether the token ‘muscle’ forms a mention by itself, because the same sentence containing three mentions (‘muscle’, ‘muscle pain’ and ‘muscle fatigue’) can be encoded using the same gold sequence of tags. We refer to a survey by (Dai, 2018) for more discussions on these models, and (Muis and Lu, 2016) for a theoretical analysis of ambiguity of these models. Similar to prior work, our proposed transitionbased model uses an intermediate representation (i.e., a sequence of actions). However, it does not suffer from this ambiguity issue. That is, the output sequence of actions can always be unambiguously decoded into mention outputs. The other two methods that focus on the discontinuous NER problem in literature are described in (McDonald et al., 2005; Wang and Lu, 2019). McDonald et al. (2005) solve the NER task as a structured multi-label classification problem. Instead of starting and ending indices, they represent each entity mention using the set of token positions that belong to the mention. This representation is flexible, as it allows mentions consisting of discontinuous tokens and does not require mentions to exclude each other. However, this method suffers from high time complexity. Tang et al. (2018) compare this representation with BIO variant schema proposed in (Metke-Jimenez and Karimi, 2016), and found that they achieve competitive F1 scores, although the latter method is more efficient. A twostage approach that first detects all components and then combines components into discontinuous mentions based on a classifier’s decision was explored in recent work by Wang and Lu (2019). Discontinuous NER vs. Nested NER Although discontinuous mentions may overlap, we discriminate this overlapping from the one in nested NER. That is, if one mention is completely contained by the other, we call mentions involved nested entity mentions. In contrast, overlapping in discontinuous NER is usually that two mentions overlap, but no one is completely contained by the other. Most of existing nested NER models are built to tackle the complete containing structure (Finkel and Manning, 2009; Lu and Roth, 2015), and they cannot be directly used to identify overlapping mentions studied in this paper, nor mention the discontinuous mentions. However, we note that there is a possible perspective to solve discontinuous NER task by adding fine-grained entity types into the schema. 
Taking the second sentence in Figure 1 as an example, we can add two new entity types, ‘Body Location’ and ‘General Feeling’, and then annotate ‘muscle pain and fatigue’ as an ‘Adverse drug event’ mention, ‘muscle’ as a ‘Body Location’ mention, and ‘pain’ and ‘fatigue’ as ‘General Feeling’ mentions (Figure 2). [Figure 2: Examples involving nested mentions: the sentence ‘have much muscle pain and fatigue .’ annotated with an ‘Adverse drug event’ mention, a ‘Body Location’ mention (‘muscle’), and two ‘General Feeling’ mentions (‘pain’ and ‘fatigue’).] The discontinuous NER task can then be converted into a nested NER task. 3 Model Transition-based models, due to their high efficiency, are widely used for NLP tasks such as parsing and entity recognition (Chen and Manning, 2014; Lample et al., 2016; Lou et al., 2017; Wang et al., 2018a). The model we propose for discontinuous NER is based on the shift-reduce parser (Watanabe and Sumita, 2015; Lample et al., 2016) that employs a stack to store partially processed spans and a buffer to store unprocessed tokens. The learning problem is then framed as: given the state of the parser, predict an action which is applied to change the state of the parser. This process is repeated until the parser reaches the end state (i.e., the stack and buffer are both empty). The main difference between our model and the ones in (Watanabe and Sumita, 2015; Lample et al., 2016) is the set of transition actions. Watanabe and Sumita (2015) use SHIFT, REDUCE, UNARY, FINISH, and IDEA for their constituent parsing system. Lample et al. (2016) use SHIFT, REDUCE, and OUT for their flat NER system. Inspired by these models, we design a set of actions specifically for recognizing discontinuous and overlapping structure. There are in total six actions in our model: • SHIFT moves the first token from the buffer to the stack; it implies this token is part of an entity mention. • OUT pops the first token of the buffer, indicating that it does not belong to any mention. • COMPLETE pops the top span of the stack, outputting it as an entity mention. If we are interested in multiple entity types, we can extend this action to COMPLETE-y, which labels the mention with entity type y. • REDUCE pops the top two spans s0 and s1 from the stack and concatenates them as a new span, which is then pushed back to the stack. • LEFT-REDUCE is similar to the REDUCE action, except that the span s1 is kept in the stack. This action indicates that the span s1 is involved in multiple mentions; in other words, several mentions share s1, which could be a single token or several tokens. • RIGHT-REDUCE is the same as LEFT-REDUCE, except that s0 is kept in the stack. [Figure 3: An example sequence of transitions for the sentence ‘have much muscle pain and fatigue .’, listing the buffer, the stack, and the predicted action at each step. Given the states of stack and buffer, as well as the previous actions, the model predicts the next action (e.g., LEFT-REDUCE), which is then applied to change the states of stack and buffer.] Figure 3 shows an example of how the parser recognizes entity mentions from a sentence. Note that, given one parser state, not all types of actions are valid. For example, if the stack does not contain any span, only SHIFT and OUT actions are valid because all other actions involve popping spans from the stack.
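To make the transition scheme above concrete, the following minimal sketch (our own illustrative code, not the released implementation at https://bit.ly/2XazEAO) simulates the six actions over token indices and decodes the resulting mentions; representing a span as a tuple of indices lets discontinuous mentions such as ‘muscle fatigue’ fall out naturally. The action sequence in the example corresponds to the worked example in Figure 3, with the trailing period omitted for brevity.

```python
# Minimal simulation of the six transition actions described above.
# Spans are tuples of token indices, so discontinuous mentions
# (non-adjacent indices) are handled exactly like continuous ones.

def decode(tokens, actions):
    """Apply a sequence of actions and return the decoded mentions."""
    buffer = list(range(len(tokens)))    # indices of unprocessed tokens
    stack = []                           # partially processed spans
    mentions = []
    for action in actions:
        if action == "SHIFT":            # buffer front is part of a mention
            stack.append((buffer.pop(0),))
        elif action == "OUT":            # buffer front belongs to no mention
            buffer.pop(0)
        elif action == "COMPLETE":       # emit the top span as a mention
            mentions.append(stack.pop())
        elif action == "REDUCE":         # concatenate the top two spans
            s0, s1 = stack.pop(), stack.pop()
            stack.append(s1 + s0)
        elif action == "LEFT-REDUCE":    # like REDUCE, but keep s1 on the stack
            s0 = stack.pop()
            stack.append(stack[-1] + s0)
        elif action == "RIGHT-REDUCE":   # like REDUCE, but keep s0 on the stack
            s0, s1 = stack[-1], stack[-2]
            del stack[-2]
            stack.append(s1 + s0)
    return [" ".join(tokens[i] for i in m) for m in mentions]

tokens = "have much muscle pain and fatigue".split()
actions = ["OUT", "OUT", "SHIFT", "SHIFT", "LEFT-REDUCE", "COMPLETE",
           "OUT", "SHIFT", "REDUCE", "COMPLETE"]
print(decode(tokens, actions))           # ['muscle pain', 'muscle fatigue']
```

Because every action either consumes a buffer token or manipulates stack spans, the same bookkeeping also makes explicit which actions are valid in a given state (for instance, the REDUCE-style actions require at least two spans on the stack).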
We employ hard constraints that we only select the most likely action from valid actions. 3.1 Representation of the Parser State Given a sequence of N tokens, we first run a bidirectional LSTM (Graves et al., 2013) to derive the contextual representation of each token. Specifically, for the i-th token in the sequence, its representation can be denoted as: ˜ci = h−−−−→ LSTM(t0, . . . , ti); ←−−−− LSTM(ti, . . . , tN−1) i , where ti is the concatenation of the embeddings for the i-th token, its character level representation learned using a CNN network (Ma and Hovy, 2016). Pretrained contextual word representations have shown its usefulness on improving various NLP tasks. Here, we can also concatenate pretrained contextual word representations using ELMo (Peters et al., 2018) with ˜ci, resulting in: ci = [˜ci; ELMoi] , (1) where ELMoi is the output representation of pretrained ELMo models (frozen) for the i-th token. These token representations c are directly used to represent tokens in the buffer. We also explore a variant that uses the output of pretrained BERT (Devlin et al., 2019) as token representations c, and fine-tune the BERT model. However, this finetuning approach with BERT does not achieve as good performance as feature extraction approach with ELMo (Peters et al., 2019). Following the work in (Dyer et al., 2015), we use Stack-LSTM to represent spans in the stack. That is, if a token is moved from the buffer to the stack, its representation is learned using: s0 = Stack-LSTM(sD . . . s1; cSHIFT), where D is the number of spans in the stack. Once REDUCE related actions are applied, we use a multi-layer perceptron to learn the representation of the concatenated span. For example, the REDUCE action takes the representation of the top two spans in the stack: s0 and s1, and produces a new span representation: ˜s = WT [s0; s1] + b, where W and b denote the parameters for the composition function. The new span representation ˜s is pushed back to the stack to replace the original two spans: s0 and s1. 3.2 Capturing Discontinuous Dependencies We hypothesize that the interactions between spans in the stack and tokens in the buffer are important factors in recognizing discontinuous mentions. Considering the example in Figure 3, a span in the 5864 stack (e.g., ‘muscle’) may need to combine with a future token in the buffer (e.g., ‘fatigue’). To capture this interaction, we use multiplicative attention (Luong et al., 2015) to let the span in the stack si learn which token in the buffer to attend, and thus a weighted sum of the representation of tokens in the buffer B: sa i = softmax(sT i Wa i B)B. (2) We use distinct Wa i for si separately. 3.3 Selecting an Action Finally, we build the parser representation as the concatenation of the representation of top three spans from the stack (s0, s1, s2) and its attended representation (sa 0, sa 1, sa 2), as well as the representation of the previous action a, which is learned using a simple unidirectional LSTM. If there are less than 3 spans in the stack or no previous action, we use randomly initialized vectors sempty or aempty to replace the corresponding vector. This parser representation is used as input for the final softmax prediction layer to select the next action. 4 Data sets Although some text annotation tools, such as BRAT (Stenetorp et al., 2012), allow discontinuous annotations, corpora annotated with a large number of discontinuous mentions are still rare. 
We use three data sets from the biomedical domain: CADEC (Karimi et al., 2015a), ShARe 13 (Pradhan et al., 2013) and ShARe 14 (Mowery et al., 2014). Around 10% of mentions in these three data sets are discontinuous. The descriptive statistics are listed in Table 1. CADEC is sourced from AskaPatient2, a forum where patients can discuss their experiences with medications. The entity types in CADEC include drug, Adverse Drug Event (ADE), disease and symptom. We only use ADE annotations because only the ADEs involve discontinuous annotations. This also allows us to compare our results directly against previously reported results (Metke-Jimenez and Karimi, 2016; Tang et al., 2018). ShARe 13 and 14 focus on the identification of disorder mentions in clinical notes, including discharge summaries, electrocardiogram, echocardiogram, and radiology reports (Johnson et al., 2016). A disorder mention is defined as any span of text which can be 2https://www.askapatient.com/ CADEC ShARe 13 ShARe 14 Text type online posts clinical notes clinical notes Entity type ADE Disorder Disorder # Documents 1,250 298 433 # Tokens 121K 264K 494K # Sentences 7,597 18,767 34,618 # Mentions 6,318 11,161 19,131 # Disc.M 675 (10.6) 1,090 (9.7) 1,710 (8.9) Avg mention L. 2.7 1.8 1.7 Avg Disc.M L. 3.5 2.6 2.5 Avg interval L. 3.3 3.0 3.2 Discontinuous Mentions 2 components 650 (95.7) 1,026 (94.3) 1,574 (95.3) 3 components 27 ( 3.9) 62 ( 5.6) 76 ( 4.6) 4 components 2 ( 0.2) 0 ( 0.0) 0 ( 0.0) No overlap 82 (12.0) 582 (53.4) 820 (49.6) Overlap at left 351 (51.6) 376 (34.5) 616 (37.3) Overlap at right 152 (22.3) 102 ( 9.3) 170 (10.3) Multiple overlaps 94 (13.8) 28 ( 2.5) 44 ( 2.6) Continuous Mentions Overlap 326 ( 5.7) 157 ( 1.5) 228 ( 1.3) Table 1: The descriptive statistics of the data sets. ADE: adverse drug events; Disc.M: discontinuous mentions; Disc.M L.: discontinuous mention length, where intervals are not counted. Numbers in parentheses are the percentage of each category. mapped to a concept in the disorder semantic group of SNOMED-CT (Cornet and de Keizer, 2008). Although these three data sets share similar field (the subject matter of the content being discussed), the tenor (the participants in the discourse, their relationships to each other, and their purposes) of CADEC is very different from the ShARe data sets (Dai et al., 2019). In general, laymen (i.e., in CADEC) tend to use idioms to describe their feelings, whereas professional practitioners (i.e., in ShARe) tend to use compact terms for efficient communications. This also results in different features of discontinuous mentions between these data sets, which we will discuss further in § 7. Experimental Setup As CADEC does not have an official train-test split, we follow Metke-Jimenez and Karimi (2016) and randomly assign 70% of the posts as the training set, 15% as the development set, and the remaining posts as the test set. 3 The train-test splits of ShARe 13 and 14 are both from their corresponding shared task settings, except that we randomly select 10% of documents from each training set as the development set. Micro 3These splits can be downloaded from https://bit.ly/2XazEAO. 5865 average strict match F1 score is used to evaluate the effectiveness of the model. The trained model which is most effective on the development set, measured using the F1 score, is used to evaluate the test set. 
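As noted above, effectiveness is measured with micro-averaged strict-match F1. The snippet below is a small sketch of such a scorer under our own representation (a mention as an entity type paired with a tuple of token indices); it is meant to illustrate the metric, not to replace the official shared-task evaluation scripts.

```python
# Illustrative strict-match micro-averaged F1 for (possibly discontinuous) mentions.
# Each mention is a (type, token_indices) pair; token_indices is a tuple of token
# positions, so discontinuous spans are scored the same way as continuous ones.

def micro_f1(gold_docs, pred_docs):
    """gold_docs / pred_docs: lists of mention sets, one set per sentence."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_docs, pred_docs):
        tp += len(gold & pred)          # exact matches only (strict match)
        fp += len(pred - gold)
        fn += len(gold - pred)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# 'muscle pain' (2, 3) and discontinuous 'muscle fatigue' (2, 5) in one sentence
gold = [{("ADE", (2, 3)), ("ADE", (2, 5))}]
pred = [{("ADE", (2, 3)), ("ADE", (2, 4, 5))}]   # second mention has a wrong span
print(micro_f1(gold, pred))                       # (0.5, 0.5, 0.5)
```

Because matching is on exact index sets, a prediction that recovers only part of a discontinuous mention, or that merges two overlapping mentions into one, is counted as both a false positive and a false negative.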
5 Baseline Models We choose one flat NER model which is strong at recognizing continuous mentions, and two discontinuous NER models as our baseline models: Flat model To train the flat model on our data sets, we use an off-the-shelf framework: Flair (Akbik et al., 2018), which achieves the state-of-the-art performance on CoNLL 03 data set. Recall that the flat model cannot be directly applied to data sets containing discontinuous mentions. Following the practice in (Stanovsky et al., 2017), we replace the discontinuous mention with the shortest span that fully covers it, and merge overlapping mentions into a single mention that covers both. Note that, different from (Stanovsky et al., 2017), we apply these changes only on the training set, but not on the development set and the test set. BIO extension model The original implementation in (Metke-Jimenez and Karimi, 2016) used a CRF model with manually designed features. We report their results on CADEC in Table 2 and reimplement a BiLSTM-CRF-ELMo model using their tag schema (denoted as ‘BIO Extension’ in Table 2). Graph-based model The original paper of (Muis and Lu, 2016) only reported the evaluation results on sentences which contain at least one discontinuous mention. We use their implementation to train the model and report evaluation results on the whole test set (denoted as ‘Graph’ in Table 2). We argue that it is important to see how a discontinuous NER model works not only on the discontinuous mentions but also on all the mentions, especially since, in real data sets, the ratio of discontinuous mentions cannot be made a priori. We do not choose the model proposed in (Wang and Lu, 2019) as the baseline model, because it is based on a strong assumption about the ratio of discontinuous mentions. Wang and Lu (2019) train and evaluate their model on sentences that contain at least one discontinuous mention. Our early experiments show that the effectiveness of their model strongly depends on this assumption. In contrast, we train and evaluate our model in a more practical setting where the number of continuous mentions is much larger than the one of discontinuous mentions. 6 Experimental Results When evaluated on the whole test set, our model outperforms three baseline models, as well as over previous reported results in the literature, in terms of recall and F1 scores (Table 2). The graph-based model achieves highest precision, but with substantially lower recall, therefore obtaining lowest F1 scores. In contrast, our model improves recall over flat and BIO extension models as well as previously reported results, without sacrificing precision. This results in more balanced precision and recall. Improved recall is especially encouraging for our motivating pharmacovigilance and medical record summarization applications, where recall is at least as important as precision. Effectiveness on recognizing discontinuous mentions Recall that only 10% of mentions in these three data sets are discontinuous. To evaluate the effectiveness of our proposed model on recognizing discontinuous mentions, we follow the evaluation approach in (Muis and Lu, 2016) where we construct a subset of test set where only sentences with at least one discontinuous mention are included (Left part of Table 3). We also report the evaluation results when only discontinuous mentions are considered (Right part of Table 3). 
Note that sentences in the former setting usually contain continuous mentions as well, including those involved in overlapping structure (e.g., ‘muscle pain’ in the sentence ‘muscle pain and fatigue’). Therefore, the flat model, which cannot predict any discontinuous mentions, still achieves 38% F1 on average when evaluated on these sentences with at least one discontinuous mention, but 0% F1 when evaluated on discontinuous mentions only. Our model again achieves the highest F1 and recall in all three data sets under both settings. The comparison between these two evaluation results also shows the necessity of comprehensive evaluation settings. The BIO E. model outperforms the graph-based model in terms of F1 score on CADEC, when evaluated on sentences with discontinuous mentions. However, it achieves only 1.8 F1 when evaluated on discontinuous mentions only. The main reason is that most of discontinuous mentions in CADEC are involved in overlapping 5866 CADEC ShARe 13 ShARe 14 Model P R F P R F P R F (Metke-Jimenez and Karimi, 2016) 64.4 56.5 60.2 – – – – – – (Tang et al., 2018) 67.8 64.9 66.3 – – – – – – (Tang et al., 2013b) – – – 80.0 70.6 75.0 – – – Flat 65.3 58.5 61.8 78.5 66.6 72.0 76.2 76.7 76.5 BIO Extension 68.7 66.1 67.4 77.0 72.9 74.9 74.9 78.5 76.6 Graph 72.1 48.4 58.0 83.9 60.4 70.3 79.1 70.7 74.7 Ours 68.9 69.0 69.0 80.5 75.0 77.7 78.1 81.2 79.6 Table 2: Evaluation results on the whole test set in terms of precision, recall and F1 score. The original ShARe 14 task focuses on template filling of disorder attributes: that is, given a disorder mention, recognize the attribute from its context. In this work, we use its mention annotations and frame the task as a discontinuous NER task. Sentences with discontinuous mentions Discontinuous mentions only CADEC ShARe 13 ShARe 14 CADEC ShARe 13 ShARe 14 Model P R F P R F P R F P R F P R F P R F Flat 50.2 36.7 42.4 43.5 28.1 34.2 41.5 31.9 36.0 0 0 0 0 0 0 0 0 0 BIO E. 63.8 52.0 57.3 51.8 39.5 44.8 37.5 38.4 37.9 5.8 1.0 1.8 39.7 12.3 18.8 8.8 4.5 6.0 Graph 69.5 43.2 53.3 82.3 47.4 60.2 60.0 52.8 56.2 60.8 14.8 23.9 78.4 36.6 50.0 42.7 39.5 41.1 Ours 66.5 64.3 65.4 70.5 56.8 62.9 61.9 64.5 63.1 41.2 35.1 37.9 78.5 39.4 52.5 56.1 43.8 49.2 Table 3: Evaluation results on sentences that contain at least one discontinuous mention (left part) and on discontinuous mentions only (right part). structure (88%, cf. Table 1), and the BIO E. model is better than the graph-based model at recognizing these continuous mentions. On ShARe 13 and 14, where the portion of discontinuous mentions involved in overlapping is much less than on CADEC, the graph-based model clearly outperforms BIO E. model in both evaluation settings. 7 Analysis We start our analysis from characterizing discontinuous mentions from the three data sets. Then we measure the behaviors of our model and two discontinuous NER models on the development sets based on characteristics identified and attempt to draw conclusions from these measurements. 7.1 Characteristics of Discontinuous Mentions Recall that discontinuous mentions usually represent compositional concepts that consist of multiple components. Therefore, discontinuous mentions are usually longer than continuous mentions (Table 1). In addition, intervals between components make the total length of span involved even longer. Previous work shows that flat NER performance degrades when applied on long mentions (Augenstein et al., 2017; Xu et al., 2017). 
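The mention-length and interval-length figures used in this analysis (and summarized in Table 1) can be derived directly from index-based annotations. The helper below is a minimal sketch under our own representation of a mention as a sorted tuple of token indices; the function name and the example indices are illustrative assumptions.

```python
# Compute descriptive statistics of a (possibly discontinuous) mention.
# A mention is a sorted tuple of token indices; gaps between consecutive
# indices are the intervals that make a mention discontinuous.

def mention_stats(mention):
    idx = sorted(mention)
    components, intervals = [], []
    start = prev = idx[0]
    for i in idx[1:]:
        if i == prev + 1:
            prev = i
        else:                             # a gap: close the component, record interval
            components.append((start, prev))
            intervals.append(i - prev - 1)
            start = prev = i
    components.append((start, prev))
    return {
        "length": len(idx),               # tokens in the mention itself
        "discontinuous": len(components) > 1,
        "num_components": len(components),
        "interval_length": sum(intervals) # tokens skipped inside the span
    }

# 'left atrium ... dilated' with 'is mildly' (2 tokens) as the interval
print(mention_stats((1, 2, 5)))
# {'length': 3, 'discontinuous': True, 'num_components': 2, 'interval_length': 2}
```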
Another characteristic of discontinuous mentions is that they usually overlap (cf. § 1). From this perspective, we can categorize discontinuous mentions into four categories: • No overlap: in such cases, the discontinuous mention can be interrupted by severity indicators (e.g., ‘is mildly’ in the sentence ‘left atrium is mildly dilated’), prepositions (e.g., ‘on my’ in the sentence ‘...rough on my stomach...’), and so on. This category accounts for half of the discontinuous mentions in the ShARe data sets but only 12% in CADEC (Table 1). • Left overlap: the discontinuous mention shares one component with other mentions, and the shared component is at the beginning of the discontinuous mention. This is usually accompanied by a coordination structure (e.g., the shared component ‘muscle’ in ‘muscle pain and fatigue’). Conjunctions (e.g., ‘and’, ‘or’) are clear indicators of the coordination structure; however, clinical notes are usually written by practitioners under time pressure, who often use commas or slashes rather than conjunctions. This category accounts for more than half of the discontinuous mentions in CADEC and one third in ShARe. • Right overlap: similar to left overlap, although the shared component is at the end. For example, ‘hip/leg/foot pain’ contains three mentions that share ‘pain’. [Figure 4: The impact of mention length and interval length on recall for BIO E., Graph, and our model; panels (a–c) plot recall against mention length and panels (d–f) against interval length on CADEC, ShARe 13, and ShARe 14. Mentions with interval length of zero are continuous mentions. Numbers in parentheses are the number of gold mentions.]
We note, however, the portion of discontinuous mentions belonging to this category is very small in all three data sets. Although our model achieves better results on ‘No overlap’ category on ShARe 13 and 14, it does not predict correctly any discontinuous mention belonging to this category on CADEC. The ineffectiveness of our model, as well as other discontinuous NER models, on CADEC ‘No overlap’ category can be attributed to two reasons: 1) the number of discontinuous mentions belonging to this category in CADEC is small (around 12%), rending the learning process more difficult. 2) the gold annotations belonging to this category are inconsistent from a linguistic perspective. For example, severity indicators are annotated as the interval of the discontinuous mention sometimes, but not often. Note that this may be reasonable from a medical perspective, as some symptoms are roughly grouped together no matter their severity, whereas some symptoms are linked to different concepts based on their severity. 7.3 Impact of Mention and Interval Length We conduct experiments to measure the ability of different models on recalling mentions of different lengths, and to observe the impact of interval lengths. We found that the recall of all models decreases with the increase of mention length in general (Figure 4 (a – c)), which is similar to previous observations in the literature on flat men5868 CADEC ShARe 13 ShARe 14 Model # F # F # F No O BIO E. 9 0.0 41 7.5 39 0.0 Graph 0.0 32.1 45.2 Ours 0.0 36.1 57.1 Left O BIO E. 54 6.0 11 25.0 30 15.7 Graph 9.2 45.5 37.7 Ours 28.6 33.3 49.2 Right O BIO E. 16 0.0 19 0.0 5 0.0 Graph 45.2 21.4 0.0 Ours 29.3 13.3 0.0 Multi O BIO E. 15 0.0 0 – 6 0.0 Graph 0.0 – 0.0 Ours 0.0 – 0.0 Table 4: Evaluation results on different categories of discontinuous mentions. ‘#’ columns show the number of gold discontinuous mentions in development set of each category. O: overlap. tions. However, the impact of interval length is not straightforward. Mentions with very short interval lengths are as difficult as those with very long interval lengths to be recognized (Figure 4 (d – f)). On CADEC, discontinuous mentions with interval length of 2 are easiest to be recognized (Figure 4 (d)), whereas those with interval length of 3 are easiest on ShARe 13 and 14. We hypothesize this also relates to annotation inconsistency, because very short intervals may be overlooked by annotators. In terms of model comparison, our model achieves highest recall in most settings. This demonstrates our model is effective to recognize both continuous and discontinuous mentions with various lengths. In contrast, the BIO E. model is only strong at recalling continuous mentions (outperforming the graph-based model), but fails on discontinuous mentions (interval lengths > 0). 7.4 Example Predictions We find that previous models often fail to identify discontinuous mentions that involve long and overlapping spans. For example, the sentence ‘Severe joint pain in the shoulders and knees.’ contains two mentions: ‘Severe joint pain in the shoulders’ and ‘Severe joint pain in the knees’. Graph-based model does not identify any mention from this sentence, resulting in a low recall. The BIO extension model predicts most of these tags (8 out of 9) correctly, but fails to decode into correct mentions (predict ‘Severe joint pain in the’, resulting in a false positive, while it misses ‘Severe joint pain in the shoulders’). In contrast, our model correctly identifies both of these two mentions. 
No model can fully recognize mentions which form crossing compositions. For example, the sentence ‘Joint and Muscle Pain / Stiffness’ contains four mentions: ‘Joint Pain’, ‘Joint Stiffness’, ‘Muscle Stiffness’ and ‘Muscle Pain’, all of which share multiple components with the others. Our model correctly predicts ‘Joint Pain’ and ‘Muscle Pain’, but it mistakenly predicts ‘Stiffness’ itself as a mention. 8 Summary We propose a simple, effective transition-based model that can recognize discontinuous mentions without sacrificing the accuracy on continuous mentions. We evaluate our model on three biomedical data sets with a substantial number of discontinuous mentions. Comparing against two existing discontinuous NER models, our model is more effective, especially in terms of recall. Acknowledgments We would like to thank Danielle Mowery for helping us to obtain the ShARe data sets. We also thank anonymous reviewers for their insightful comments. Xiang Dai is supported by Sydney University’s Engineering and Information Technologies Research Scholarship as well as CSIRO’s Data61 top up scholarship. References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING, pages 1638–1649, Santa Fe, New Mexico. Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum. 2017. SemEval 2017 task 10: ScienceIE - extracting keyphrases and relations from scientific publications. In SemEval, pages 546–555, Vancouver, Canada. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP, pages 740–750, Doha, Qatar. Ronald Cornet and Nicolette de Keizer. 2008. Forty years of SNOMED: a literature review. In BMC Med Inform Decis Mak, volume 8, page S2. Xiang Dai. 2018. Recognizing complex entity mentions: A review and future directions. In ACL@SRW, pages 37–44, Melbourne, Australia. Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2019. Using similarity measures to select pretraining data for ner. In NAACL, pages 1460–1470, Minneapolis, Minnesota. 5869 Xiang Dai, Sarvnaz Karimi, and C˙ecile Paris. 2017. Medication and adverse event extraction from noisy text. In ALTA, pages 79–87, Brisbane, Australia. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, pages 4171–4186, Minneapolis, Minnesota. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In ACL-IJCNLP, pages 334–343, Beijing, China. Joshua C Feblowitz, Adam Wright, Hardeep Singh, Lipika Samal, and Dean F Sittig. 2011. Summarization of clinical information: a conceptual model. J Biomed Inform, 44(4):688–699. Jessica Ficler and Yoav Goldberg. 2016. A neural network for coordination boundary prediction. In EMNLP, pages 23–32, Austin, Texas. Jenny Rose Finkel and Christopher Manning. 2009. Nested named entity recognition. In EMNLP, pages 141–150, Singapore. Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. 2013. Speech recognition with deep recurrent neural networks. In ICASSP, pages 6645–6649. Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. MIMIC-III, A freely accessible critical care database. Sci. Data, 3:160035. 
Sarvnaz Karimi, Alejandro Metke-Jimenez, Madonna Kemp, and Chen Wang. 2015a. CADEC: A corpus of adverse drug event annotations. J Biomed Inform, 55:73–81. Sarvnaz Karimi, Chen Wang, Alejandro MetkeJimenez, Raj Gaire, and C´ecile Paris. 2015b. Text and data mining techniques in adverse drug reaction detection. ACM Comput. Surv., 47(4):56:1–56:39. Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In NAACL, pages 861– 871, New Orleans, Louisiana. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL, pages 260–270, San Diego, California. Robert Leaman, Ritu Khare, and Zhiyong Lu. 2015. Challenges in clinical natural language processing for automated disorder normalization. J Biomed Inform, 57:28–37. Robert Leaman, Laura Wojtulewicz, Ryan Sullivan, Annie Skariah, Jian Yang, and Graciela Gonzalez. 2010. Towards internet-age pharmacovigilance: Extracting adverse drug reactions from user posts in health-related social networks. In BioNLP, pages 117–125, Uppsala, Sweden. Yinxia Lou, Yue Zhang, Tao Qian, Fei Li, Shufeng Xiong, and Donghong Ji. 2017. A transition-based joint model for disease named entity recognition and normalization. Bioinformatics, 33(15):2363–2371. Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In EMNLP, pages 857–867, Lisbon, Portugal. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP, pages 1412– 1421, Lisbon, Portugal. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In ACL, pages 1064–1074, Berlin, Germany. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Flexible text segmentation with structured multilabel classification. In EMNLP, pages 987–994, Vancouver, British Columbia, Canada. Alejandro Metke-Jimenez and Sarvnaz Karimi. 2016. Concept identification and normalisation for adverse drug event discovery in medical forums. In BMDID@ISWC, Kobe, Japan. Danielle L Mowery, Sumithra Velupillai, Brett R South, Lee Christensen, David Martinez, Liadh Kelly, Lorraine Goeuriot, Noemie Elhadad, Sameer Pradhan, Guergana Savova, and Wendy W Chapman. 2014. Task 2: ShARe/CLEF ehealth evaluation lab 2014. In CLEF. Aldrian Obaja Muis and Wei Lu. 2016. Learning to recognize discontiguous entities. In EMNLP, pages 75–84, Austin, Texas. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL, pages 2227–2237, New Orleans, Louisiana. Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? adapting pretrained representations to diverse tasks. In RepL4NLP@ACL, pages 7–14, Florence, Italy. Sameer Pradhan, Noemie Elhadad, Brett R South, David Martinez, Lee M Christensen, Amy Vogel, Hanna Suominen, Wendy W Chapman, and Guergana K Savova. 2013. Task 1: ShARe/CLEF ehealth evaluation lab 2013. In CLEF. Nicky Ringland, Xiang Dai, Ben Hachey, Sarvnaz Karimi, Cecile Paris, and James R. Curran. 2019. NNE: A dataset for nested named entity recognition 5870 in english newswire. In ACL, pages 5176–5181, Florence, Italy. Abeed Sarker, Rachel Ginn, Azadeh Nikfarjam, Karen O’Connor, Karen Smith, Swetha Jayaraman, Tejaswi Upadhaya, and Graciela Gonzalez. 2015. Utilizing social media data for pharmacovigilance: a review. J Biomed Inform, 54:202–212. 
Wei Shen, Jianyong Wang, and Jiawei Han. 2015. Entity linking with a knowledge base: Issues, techniques, and solutions. IEEE Trans. Knowl. Data Eng., 27(2):443–460. Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In EMNLP, pages 2843–2849, Brussels, Belgium. Gabriel Stanovsky, Daniel Gruhl, and Pablo Mendes. 2017. Recognizing mentions of adverse drug reaction in social media using knowledge-infused recurrent models. In EACL, pages 142–151, Valencia, Spain. Pontus Stenetorp, Sampo Pyysalo, Goran Topi´c, Tomoko Ohta, Sophia Ananiadou, and Jun’ichi Tsujii. 2012. brat: a web-based tool for NLP-assisted text annotation. In EACL, pages 102–107, Avignon, France. Buzhou Tang, Hongxin Cao, Yonghui Wu, Min Jiang, and Hua Xu. 2013a. Recognizing clinical entities in hospital discharge summaries using structural support vector machines with word representation features. In BMC Med Inform Decis Mak, volume 13, page S1. Buzhou Tang, Jianglu Hu, Xiaolong Wang, and Qingcai Chen. 2018. Recognizing continuous and discontinuous adverse drug reaction mentions from social media using LSTM-CRF. Wirel Commun Mob Com, 2018. Buzhou Tang, Yonghui Wu, Min Jiang, and Joshua C. Denny. 2013b. Recognizing and encoding disorder concepts in clinical text using machine learning and vector space model. In CLEF. Bailin Wang and Wei Lu. 2019. Combining spans into entities: A neural two-stage approach for recognizing discontiguous entities. In EMNLP, pages 6215– 6223, Hong Kong, China. Bailin Wang, Wei Lu, Yu Wang, and Hongxia Jin. 2018a. A neural transition-based model for nested mention recognition. In EMNLP, pages 1011–1017, Brussels, Belgium. Yanshan Wang, Liwei Wang, Majid Rastegar-Mojarad, Sungrim Moon, Feichen Shen, Naveed Afzal, Sijia Liu, Yuqun Zeng, Saeed Mehrabi, Sunghwan Sohn, and Hongfang Liu. 2018b. Clinical information extraction applications: A literature review. J Biomed Inform, 77:34–49. Taro Watanabe and Eiichiro Sumita. 2015. Transitionbased neural constituent parsing. In ACL-IJCNLP, pages 1169–1179, Beijing, China. Mingbin Xu, Hui Jiang, and Sedtawut Watcharawittayakul. 2017. A local detection approach for named entity recognition and mention detection. In ACL, pages 1237–1247, Vancouver, Canada.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5871–5886 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5871 IMOJIE: Iterative Memory-Based Joint Open Information Extraction Keshav Kolluru1, Samarth Aggarwal1, Vipul Rathore1, Mausam1, and Soumen Chakrabarti2 1 Indian Institute of Technology Delhi [email protected], [email protected] [email protected], [email protected] 2 Indian Institute of Technology Bombay [email protected] Abstract While traditional systems for Open Information Extraction were statistical and rule-based, recently neural models have been introduced for the task. Our work builds upon CopyAttention, a sequence generation OpenIE model (Cui et al., 2018). Our analysis reveals that CopyAttention produces a constant number of extractions per sentence, and its extracted tuples often express redundant information. We present IMOJIE, an extension to CopyAttention, which produces the next extraction conditioned on all previously extracted tuples. This approach overcomes both shortcomings of CopyAttention, resulting in a variable number of diverse extractions per sentence. We train IMOJIE on training data bootstrapped from extractions of several non-neural systems, which have been automatically filtered to reduce redundancy and noise. IMOJIE outperforms CopyAttention by about 18 F1 pts, and a BERT-based strong baseline by 2 F1 pts, establishing a new state of the art for the task. 1 Introduction Extracting structured information from unstructured text has been a key research area within NLP. The paradigm of Open Information Extraction (OpenIE) (Banko et al., 2007) uses an open vocabulary to convert natural text to semi-structured representations, by extracting a set of (subject, relation, object) tuples. OpenIE has found wide use in many downstream NLP tasks (Mausam, 2016) like multi-document question answering and summarization (Fan et al., 2019), event schema induction (Balasubramanian et al., 2013) and word embedding generation (Stanovsky et al., 2015). Traditional OpenIE systems are statistical or rule-based. They are largely unsupervised in nature, or bootstrapped from extractions made by earlier systems. They often consist of several components like POS tagging, and syntactic parsing. To bypass error accumulation in such pipelines, end-to-end neural systems have been proposed recently. Recent neural OpenIE methods belong to two categories: sequence labeling, e.g., RnnOIE (Stanovsky et al., 2018) and sequence generation, e.g., CopyAttention (Cui et al., 2018). In principle, generation is more powerful because it can introduce auxiliary words or change word order. However, our analysis of CopyAttention reveals that it suffers from two drawbacks. First, it does not naturally adapt the number of extractions to the length or complexity of the input sentence. Second, it is susceptible to stuttering: extraction of multiple triples bearing redundant information. These limitations arise because its decoder has no explicit mechanism to remember what parts of the sentence have already been ‘consumed’ or what triples have already been generated. Its decoder uses a fixed-size beam for inference. However, beam search can only ensure that the extractions are not exact duplicates. In response, we design the first neural OpenIE system that uses sequential decoding of tuples conditioned on previous tuples. We achieve this by adding every generated extraction so far to the encoder. 
This iterative process stops when the EndOfExtractions tag is generated by the decoder, allowing it to produce a variable number of extractions. We name our system Iterative MemOry Joint Open Information Extraction (IMOJIE). CopyAttention uses a bootstrapping strategy, where the extractions from OpenIE-4 (Christensen et al., 2011; Pal and Mausam, 2016) are used as training data. However, we believe that training on extractions of multiple systems is preferable. For example, OpenIE-4 benefits from high precision compared to ClausIE (Del Corro and Gemulla, 2013), which offers high recall. By aggregating extractions from both, IMOJIE could potentially 5872 Sentence He was appointed Commander of the Order of the British Empire in the 1948 Queen’s Birthday Honours and was knighted in the 1953 Coronation Honours . CopyAttention ( He ; was appointed ; Commander ... Birthday Honours ) ( He ; was appointed ; Commander ... Birthday Honours and was knighted ... Honours ) ( Queen ’s Birthday Honours ; was knighted ; in the 1953 Coronation Honours ) ( He ; was appointed ; Commander of the Order of the British Empire in the 1948 ) ( the 1948 ; was knighted ; in the 1953 Coronation Honours) IMOJIE ( He ; was appointed ; Commander of the Order ... Birthday Honours ) ( He ; was knighted ; in the 1953 Coronation Honours ) Table 1: IMOJIE vs. CopyAttention. CopyAttention suffers from stuttering, which IMOJIE does not. Sentence Greek and Roman pagans , who saw their relations with the gods in political and social terms , scorned the man who constantly trembled with fear at the thought of the gods , as a slave might fear a cruel and capricious master . OpenIE-4 ( the man ; constantly trembled ; ) IMOJIE ( a slave ; might fear ; a cruel and capricious master ) ( Greek and Roman pagans ; scorned ; the man who ... capricious master ) ( the man ; constantly trembled ; with fear at the thought of the gods ) ( Greek and Roman pagans ; saw ; their relations with the gods in political and social terms ) Table 2: IMOJIE vs. OpenIE-4. Pipeline nature of OpenIE-4 can get confused by long convoluted sentences, but IMOJIE responds gracefully. obtain a better precision-recall balance. However, simply concatenating extractions from multiple systems does not work well, as it leads to redundancy as well as exaggerated noise in the dataset. We devise an unsupervised Score-andFilter mechanism to automatically select a subset of these extractions that are non-redundant and expected to be of high quality. Our approach scores all extractions with a scoring model, followed by filtering to reduce redundancy. We compare IMOJIE against several neural and non-neural systems, including our extension of CopyAttention that uses BERT (Devlin et al., 2019) instead of an LSTM at encoding time, which forms a very strong baseline. On the recently proposed CaRB metric, which penalizes redundant extractions (Bhardwaj et al., 2019), IMOJIE outperforms CopyAttention by about 18 pts in F1 and our strong BERT baseline by 2 pts, establishing a new state of the art for OpenIE. We release IMOJIE & all related resources for further research1. In summary, our contributions are: • We propose IMOJIE, a neural OpenIE system that generates the next extraction, fully conditioned on the extractions produced so far. IMOJIE produce a variable number of diverse extractions for a sentence, • We present an unsupervised aggregation scheme to bootstrap training data by combining extractions from multiple OpenIE systems. 
• IMOJIE trained on this data establishes a new 1https://github.com/dair-iitd/imojie SoTA in OpenIE, beating previous systems and also our strong BERT-baseline. 2 Related Work Open Information Extraction (OpenIE) involves extracting (arg1 phrase, relation phrase, arg2 phrase) assertions from a sentence. Traditional open extractors are rule-based or statistical, e.g., Textrunner (Banko et al., 2007), ReVerb (Fader et al., 2011; Etzioni et al., 2011), OLLIE (Mausam et al., 2012), Stanford-IE (Angeli et al., 2015), ClausIE (Del Corro and Gemulla, 2013), OpenIE4 (Christensen et al., 2011; Pal and Mausam, 2016), OpenIE-5 (Saha et al., 2017, 2018), PropS (Stanovsky et al., 2016), and MinIE (Gashteovski et al., 2017). These use syntactic or semantic parsers combined with rules to extract tuples from sentences. Recently, to reduce error accumulation in these pipeline systems, neural OpenIE models have been proposed. They belong to one of two paradigms: sequence labeling or sequence generation. Sequence Labeling involves tagging each word in the input sentence as belonging to the subject, predicate, object or other. The final extraction is obtained by collecting labeled spans into different fields and constructing a tuple. RnnOIE (Stanovsky et al., 2018) is a labeling system that first identifies the relation words and then uses sequence labelling to get their arguments. It is trained on OIE2016 dataset, which postprocesses SRL data for OpenIE (Stanovsky and Dagan, 2016). 5873 Figure 1: One step of the sequential decoding process, for generating the ith extraction, which takes the original sentence and all extractions numbered 1, . . . , i −1, previously generated, as input. SenseOIE (Roy et al., 2019), improves upon RnnOIE by using the extractions of multiple OpenIE systems as features in a sequence labeling setting. However, their training requires manually annotated gold extractions, which is not scalable for the task. This restricts SenseOIE to train on a dataset of 3,000 sentences. In contrast, our proposed Score-and-Filter mechanism is unsupervised and can scale unboundedly. Jiang et al. (2019) is another labeling system that better calibrates extractions across sentences. SpanOIE (Zhan and Zhao, 2020) uses a span selection model, a variant of the sequence labelling paradigm. Firstly, the predicate module finds the predicate spans in a sentence. Subsequently, the argument module outputs the arguments for this predicate. However, SpanOIE cannot extract nominal relations. Moreover, it bootstraps its training data over a single OpenIE system only. In contrast, IMOJIE overcomes both of these limitations. Sequence Generation uses a Seq2Seq model to generate output extractions one word at a time. The generated sequence contains field demarcators, which are used to convert the generated flat sequence to a tuple. CopyAttention (Cui et al., 2018) is a neural generator trained over bootstrapped data generated from OpenIE-4 extractions on a large corpus. During inference, it uses beam search to get the predicted extractions. It uses a fixed-size beam, limiting it to output a constant number of extractions per sentence. Moreover, our analysis shows that CopyAttention extractions severely lack in diversity, as illustrated in Table 1. Sun et al. (2018) propose the Logician model, a restricted sequence generation model for extracting tuples from Chinese text. Logician relies on coverage attention and gated-dependency attention, a language-specific heuristic for Chinese. 
Using coverage attention, the model also tackles generation of multiple extractions while being globally-aware. We compare against Logician’s coverage attention as one of the approaches for increasing diversity. Sequence-labeling based models lack the ability to change the sentence structure or introduce new auxiliary words while uttering predictions. For example, they cannot extract (Trump, is the President of, US) from “US President Trump”, since ‘is’, ‘of’ are not in the original sentence. On the other hand, sequence-generation models are more general and, in principle, need not suffer from these limitations. Evaluation: All neural models have shown improvements over the traditional systems using the OIE2016 benchmark. However, recent work shows that the OIE2016 dataset is quite noisy, and that its evaluation does not penalize highly redundant extractions (L´echelle et al., 2018). In our work, we use the latest CaRB benchmark, which crowdsources a new evaluation dataset, and also provides a modified evaluation framework to downscore near-redundant extractions (Bhardwaj et al., 2019). 3 Sequential Decoding We now describe IMOJIE, our generative approach that can output a variable number of diverse extractions per sentence. The architecture of our model is illustrated in Figure 1. At a high level, the next extraction from a sentence is best determined in context of all other tuples extracted from it so far. Hence, IMOJIE uses a decoding strategy that generates extractions in a sequential fashion, one after another, each one being aware of all the ones generated prior to it. This kind of sequential decoding is made possible by the use of an iterative memory. Each of the generated extractions are added to the memory so that the next iteration of decoding has access to all of the previous extractions. We simulate this iterative memory with the help of BERT encoder, whose input includes the [CLS] token and original 5874 Figure 2: Ranking-Filtering subsystem for combining extractions from multiple open IE systems in an unsupervised fashion. (‘Exts’=extractions.) sentence appended with the decoded extractions so far, punctuated by the separator token [SEP] before each extraction. IMOJIE uses an LSTM decoder, which is initialized with the embedding of [CLS] token. The contextualized-embeddings of all the word tokens are used for the Copy (Gu et al., 2016) and Attention (Bahdanau et al., 2015) modules. The decoder generates the tuple one word at a time, producing ⟨rel⟩and ⟨obj⟩tokens to indicate the start of relation and object respectively. The iterative process continues until the EndOfExtractions token is generated. The overall process can be summarized as: 1. Pass the sentence through the Seq2Seq architecture to generate the first extraction. 2. Concatenate the generated extraction with the existing input and pass it again through the Seq2Seq architecture to generate the next extraction. 3. Repeat Step 2 until the EndOfExtractions token is generated. IMOJIE is trained using a cross-entropy loss between the generated output and the gold output. 4 Aggregating Bootstrapped Data 4.1 Single Bootstrapping System To train generative neural models for the task of OpenIE, we need a set of sentence-extraction pairs. It is ideal to curate such a training dataset via human annotation, but that is impractical, considering the scale of training data required for a neural model. We follow Cui et al. 
(2018), and use bootstrapping — using extractions from a pre-existing OpenIE system as ‘silver’-labeled (as distinct from ‘gold’-labeled) instances to train the neural model. We first order all extractions in the decreasing order of confidences output by the original system. We then construct training data in IMOJIE’s inputoutput format, assuming that this is the order in which it should produce its extractions. 4.2 Multiple Bootstrapping Systems Different OpenIE systems have diverse quality characteristics. For example, the human-estimated (precision, recall) of OpenIE-4 is (61, 43) while that of ClausIE is (40, 50). Thus, by using their combined extractions as the bootstrapping dataset, we might potentially benefit from the high precision of OpenIE-4 and high recall of ClausIE. However, simply pooling all extractions would not work, because of the following serious hurdles. No calibration: Confidence scores assigned by different systems are not calibrated to a comparable scale. Redundant extractions: Beyond exact duplicates, multiple systems produce similar extractions with low marginal utility. Wrong extractions: Pooling inevitably pollutes the silver data and can amplify incorrect instances, forcing the downstream open IE system to learn poor-quality extractions. We solve these problems using a Score-and-Filter framework, shown in Figure 2. Scoring: All systems are applied on a given sentence, and the pooled set of extractions are scored such that good (correct, informative) extractions generally achieve higher values compared to bad (incorrect) and redundant ones. In principle, this score may be estimated by the generation score from IMOJIE, trained on a single system. In practice, such a system is likely to consider extractions similar to its bootstrapping training data as good, while disregarding extractions of other systems, even though those extractions may also be of high quality. To mitigate this bias, we use an IMOJIE model, pre-trained on a random bootstrapping dataset. The random bootstrapping dataset is generated by picking extractions for each sentence randomly from any one of the bootstrapping systems being aggregated. We assign a score to each extraction in the pool based on the confidence value given to it by this IMOJIE (Random) model. Filtering: We now filter this set of extractions for 5875 redundancy. Given the set of ranked extractions in the pool, we wish to select that subset of extractions that have the best confidence scores (assigned by the random-boostrap model), while having minimum similarity to the other selected extractions. We model this goal as the selection of an optimal subgraph from a suitably designed complete weighted graph. Each node in the graph corresponds to one extraction in the pool. Every pair of nodes (u, v) are connected by an edge. Every edge has an associated weight R(u, v) signifying the similarity between the two corresponding extractions. Each node u is assigned a score f(u) equal to the confidence given by the random-bootstrap model. Given this graph G = (V, E) of all pooled extractions of a sentence, we aim at selecting a subgraph G′ = (V ′, E′) with V ′ ⊆V , such that the most significant ones are selected, whereas the extractions redundant with respect to already-selected ones are discarded. Our objective is max G′⊆G |V ′| X i=1 f(ui) − |V ′|−1 X j=1 |V ′| X k=j+1 R(uj, uk), (1) where ui represents node i ∈V ′. We compute R(u, v) as the ROUGE2 score between the serialized triples represented by nodes u and v. 
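As a reference point for Equation (1), the following is a small brute-force sketch of the Score-and-Filter objective. The bigram-overlap function is only a simple stand-in for ROUGE-2, the confidence values are hypothetical, and exhaustive enumeration is feasible only for small pools (the paper instead casts the problem as a quadratic boolean program solved with QPBO).

```python
from itertools import combinations

def bigram_overlap(a, b):
    """Simple bigram-overlap F1, a stand-in for the ROUGE-2 score R(u, v)."""
    def bigrams(text):
        toks = text.lower().split()
        return set(zip(toks, toks[1:]))
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    inter = len(ba & bb)
    if inter == 0:
        return 0.0
    prec, rec = inter / len(ba), inter / len(bb)
    return 2 * prec * rec / (prec + rec)

def select_extractions(extractions, scores):
    """Brute-force maximisation of Eq. (1): the sum of node scores of the
    selected subset minus the pairwise redundancy among its members.
    Only intended for small pools; the paper uses a QPBO solver instead."""
    n = len(extractions)
    best_value, best_subset = float("-inf"), ()
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            value = sum(scores[i] for i in subset)
            value -= sum(bigram_overlap(extractions[j], extractions[k])
                         for j, k in combinations(subset, 2))
            if value > best_value:
                best_value, best_subset = value, subset
    return [extractions[i] for i in best_subset]

# Toy pool: two near-duplicate extractions and one distinct extraction.
pool = ["He ; was appointed ; Commander of the Order",
        "He ; was appointed ; Commander of the Order of the British Empire",
        "He ; was knighted ; in the 1953 Coronation Honours"]
conf = [0.9, 0.8, 0.7]   # hypothetical IMOJIE (Random) confidences
print(select_extractions(pool, conf))
```

On this toy pool, the optimum keeps the distinct "knighted" extraction and only one of the two near-duplicate "appointed" extractions, which is the behaviour the objective is designed to encourage.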
We can intuitively understand the first term as the aggregated sum of significance of all selected triples and second term as the redundancy among these triples. If G has n nodes, we can pose the above objective as: max x∈{0,1}n x⊤f −x⊤Rx, (2) where f ∈Rn representing the node scores, i.e., f[i] = f(ui), and R ∈Rn×n is a symmetric matrix with entries Rj,k = ROUGE2(uj, uk). x is the decision vector, with x[i] indicating whether a particular node ui ∈V ′ or not. This is an instance of Quadratic Boolean Programming and is NP-hard, but in our application n is modest enough that this is not a concern. We use the QPBO (Quadratic Pseudo Boolean Optimizer) solver2 (Rother et al., 2007) to find the optimal x∗and recover V ′. 5 Experimental Setup 5.1 Training Data Construction We obtain our training sentences by scraping Wikipedia, because Wikipedia is a comprehensive source of informative text from diverse domains, 2https://pypi.org/project/thinqpbo/ rich in entities and relations. Using sentences from Wikipedia ensures that our model is not biased towards data from any single domain. We run OpenIE-43, ClausIE4 and RnnOIE5 on these sentences to generate a set of OpenIE tuples for every sentence, which are then ranked and filtered using our Score-and-Filter technique. These tuples are further processed to generate training instances in IMOJIE’s input-output format. Each sentence contributes to multiple (input, output) pairs for the IMOJIE model. The first training instance contains the sentence itself as input and the first tuple as output. For example, (“I ate an apple and an orange.”, “I; ate; an apple”). The next training instance, contains the sentence concatenated with previous tuple as input and the next tuple as output (“I ate an apple and an orange. [SEP] I; ate; an apple”, “I; ate; an orange”). The final training instance generated from this sentence includes all the extractions appended to the sentence as input and EndOfExtractions token as the output. Every sentence gives the seq2seq learner one training instance more than the number of tuples. While forming these training instances, the tuples are considered in decreasing order of their confidence scores. If some OpenIE system does not provide confidence scores for extracted tuples, then the output order of the tuples may be used. 5.2 Dataset and Evaluation Metrics We use the CaRB data and evaluation framework (Bhardwaj et al., 2019) to evaluate the systems6 at different confidence thresholds, yielding a precision-recall curve. We identify three important summary metrics from the P-R curve. Optimal F1: We find the point in the P-R curve corresponding to the largest F1 value and report that. This is the operating point for getting extractions with the best precision-recall trade-off. AUC: This is the area under the P-R curve. This metric is useful when the downstream application can use the confidence value of the extraction. Last F1: This is the F1 score computed at the point of zero confidence. This is of importance when we cannot compute the optimal threshold, due to lack of any gold-extractions for the domain. 3https://github.com/knowitall/openie 4https://www.mpi-inf.mpg.de/clausie 5https://github.com/gabrielStanovsky/supervised-oie 6Our reported CaRB scores for OpenIE-4 and OpenIE-5 are slightly different from those reported by Bhardwaj et al. (2019). The authors of CaRB have verified our values. 5876 System Metric Opt. 
F1 AUC Last F1 Stanford-IE 23 13.4 22.9 OllIE 41.1 22.5 40.9 PropS 31.9 12.6 31.8 MinIE 41.9 -∗ 41.9 OpenIE-4 51.6 29.5 51.5 OpenIE-5 48.5 25.7 48.5 ClausIE 45.1 22.4 45.1 CopyAttention 35.4 20.4 32.8 RNN-OIE 49.2 26.5 49.2 Sense-OIE 17.2 -∗ 17.2 Span-OIE 47.9 -∗ 47.9 CopyAttention + BERT 51.6 32.8 49.6 IMOJIE 53.5 33.3 53.3 Table 3: Comparison of various OpenIE systems - nonneural, neural and proposed models. (*) Cannot compute AUC as Sense-OIE, MinIE do not emit confidence values for extractions and released code for Span-OIE does not provision calculation of confidence values. In these cases, we report the Last F1 as the Opt. F1 Many downstream applications of OpenIE, such as text comprehension (Stanovsky et al., 2015) and sentence similarity estimation (Christensen et al., 2014), use all the extractions output by the OpenIE system. Last F1 is an important measure for such applications. 5.3 Comparison Systems We compare IMOJIE against several nonneural baselines, including Stanford-IE, OpenIE-4, OpenIE-5, ClausIE, PropS, MinIE, and OLLIE. We also compare against the sequence labeling baselines of RnnOIE, SenseOIE, and the span selection baseline of SpanOIE. Probably the most closely related baseline to us is the neural generation baseline of CopyAttention. To increase CopyAttention’s diversity, we compare against an English version of Logician, which adds coverage attention to a singledecoder model that emits all extractions one after another. We also compare against CopyAttention augmented with diverse beam search (Vijayakumar et al., 2018) — it adds a diversity term to the loss function so that new beams have smaller redundancy with respect to all previous beams. Finally, because our model is based on BERT, we reimplement CopyAttention with a BERT encoder — this forms a very strong baseline for our task. 5.4 Implementation We implement IMOJIE in the AllenNLP framework7 (Gardner et al., 2018) using Pytorch 1.2. We use “BERT-small” model for faster training. Other 7https://github.com/allenai/allennlp System Metric Opt. F1 AUC Last F1 CopyAttention 35.4 20.4 32.8 CoverageAttention 41.8 22.1 41.8 CoverageAttention+BERT 47.9 27.9 47.9 Diverse Beam Search 46.1 26.1 39.6 IMOJIE (w/o BERT) 37.9 19.1 36.6 IMOJIE 53.2 33.1 52.4 Table 4: Models to solve the redundancy issue prevalent in Generative Neural OpenIE systems. All systems are bootstrapped on OpenIE-4. Bootstrapping Metric Systems Opt. F1 AUC Last F1 ClausIE 49.2 31.4 45.5 RnnOIE 51.3 31.1 50.8 OpenIE-4 53.2 33.1 52.4 OpenIE-4+ClausIE 51.5 32.5 47.1 OpenIE-4+RnnOIE 53.1 32.1 53.0 ClausIE+RnnOIE 50.9 32.2 49.8 All 53.5 33.3 53.3 Table 5: IMOJIE trained with different combinations of bootstrapping data from 3 systems - OpenIE-4, ClausIE, RNNOIE. Graph filtering is not used over single datasets. hyper-parameters include learning rate for BERT, set to 2 × 10−5, and learning rate, hidden dimension, and word embedding dimension of the decoder LSTM, set to (10−3, 256, 100), respectively. Since the model or code of CopyAttention (Cui et al., 2018) were not available, we implemented it ourselves. Our implementation closely matches their reported scores, achieving (F1, AUC) of (56.4, 47.7) on the OIE2016 benchmark. 6 Results and Analysis 6.1 Performance of Existing Systems How well do the neural systems perform as compared to the rule-based systems? Using CaRB evaluation, we find that, contrary to previous papers, neural OpenIE systems are not necessarily better than prior non-neural systems (Table 3). 
Among the systems under consideration, the best non-neural system reached Last F1 of 51.5, whereas the best existing neural model could only reach 49.2. Deeper analysis reveals that CopyAttention produces redundant extractions conveying nearly the same information, which CaRB effectively penalizes. RnnOIE performs much better, however suffers due to its lack of generating auxilliary verbs and implied prepositions. Example, it can only generate (Trump; President; US) instead of (Trump; is President of; US) from the sentence 5877 Filtering Metric Opt. F1 AUC Last F1 None 49.7 34.5 37.4 Extraction-based 46 29.2 44.9 Sentence-based 49.5 32.7 48.6 Score-And-Filter 53.5 33.3 53.3 Table 6: Performance of IMOJIE on aggregated dataset OpenIE-4+ClausIE+RnnOIE, with different filtering techniques. For comparison, SenseOIE trained on multiple system extractions gives an F1 of 17.2 on CaRB. Figure 3: Precision-Recall curve of OpenIE Systems. “US President Trump...”. Moreover, it is trained only on limited number of pseudo-gold extractions, generated by Michael et al. (2018), which does not take advantage of boostrapping techniques. 6.2 Performance of IMOJIE How does IMOJIE perform compared to the previous neural and rule-based systems? In comparison with existing neural and nonneural systems, IMOJIE trained on aggregated bootstrapped data performs the best. It outperforms OpenIE-4, the best existing OpenIE system, by 1.9 F1 pts, 3.8 pts of AUC, and 1.8 pts of Last-F1. Qualitatively, we find that it makes fewer mistakes than OpenIE-4, probably because OpenIE-4 accumulates errors from upstream parsing modules (see Table 2). IMOJIE outperforms CopyAttention by large margins – about 18 Optimal F1 pts and 13 AUC pts. Qualitatively, it outputs non-redundant extractions through the use of its iterative memory (see Table 1), and a variable number of extractions owing to the EndofExtractions token. It also outperforms CopyAttention with BERT, which is a very strong baseline, by 1.9 Opt. F1 pts, 0.5 AUC and 3.7 Last F1 pts. IMOJIE consistently outperforms CopyAttention with BERT over different bootstrapping datasets (see Table 8). Figure 3 shows that the precision-recall curve of IMOJIE is consistently above that of existing OpenIE systems, emphasizing that IMOJIE is consistently better than them across the different confidence thresholds. We do find that CopyAttention+BERT outputs slightly higher recall at a significant loss of precision (due to its beam search with constant size), which gives it some benefit in the overall AUC. CaRB evaluation of SpanOIE8 results in (precision, recall, F1) of (58.9, 40.3, 47.9). SpanOIE sources its training data only from OpenIE-4. In order to be fair, we compare it against IMOJIE trained only on data from OpenIE-4 which evaluates to (60.4, 46.3, 52.4). Hence, IMOJIE outperforms SpanOIE, both in precision and recall. Attention is typically used to make the model focus on words which are considered important for the task. But the IMOJIE model successfully uses attention to forget certain words, those which are already covered. Consider, the sentence “He served as the first prime minister of Australia and became a founding justice of the High Court of Australia”. Given the previous extraction (He; served; as the first prime minister of Australia), the BERTs attention layers figure out that the words ‘prime’ and ‘minister’ have already been covered, and thus push the decoder to prioritize ‘founding’ and ‘justice’. 
Appendix D analyzes the attention patterns of the model when generating the intermediate extraction in the above example and shows that IMOJIE gives less attention to already covered words. 6.3 Redundancy What is the extent of redundancy in IMOJIE when compared to earlier OpenIE systems? We also investigate other approaches to reduce redundancy in CopyAttention, such as Logician’s coverage attention (with both an LSTM and a BERT encoder) as well as diverse beam search. Table 4 reports that both these approaches indeed make significant improvements on top of CopyAttention scores. In particular, qualitative analysis of diverse beam search output reveals that the model gives out different words in different tuples in an effort to be diverse, without considering their correctness. Moreover, since this model uses beam search, it still outputs a fixed number of tuples. This analysis naturally suggested the IMOJIE (w/o BERT) model — an IMOJIE variation that uses an LSTM encoder instead of BERT. Un8https://github.com/zhanjunlang/Span OIE 5878 Extractions Metric MNO IOU #Tuples CopyAttention+BERT 2.805 0.463 3159 IMOJIE 1.282 0.208 1620 Gold 1.927 0.31 2650 Table 7: Measuring redundancy of extractions. MNO stands for Mean Number of Occurrences. IOU stands for Intersection over Union. fortunately, IMOJIE (w/o BERT) is behind the CopyAttention baseline by 12.1 pts in AUC and 4.4 pts in Last F1. We hypothesize that this is because the LSTM encoder is unable to learn how to capture inter-fact dependencies adequately — the input sequences are too long for effectively training LSTMs. This explains our use of Transformers (BERT) instead of the LSTM encoder to obtain the final form of IMOJIE. With a better encoder, IMOJIE is able to perform up to its potential, giving an improvement of (17.8, 12.7, 19.6) pts in (Optimal F1, AUC, Last F1) over existing seq2seq OpenIE systems. We further measure two quantifiable metrics of redundancy: Mean Number of Occurrences (MNO): The average number of tuples, every output word appears in. Intersection Over Union (IOU): Cardinality of intersection over cardinality of union of words in the two tuples, averaged over all pairs of tuples. These measures were calculated after removing stop words from tuples. Higher value of these measures suggest higher redundancy among the extractions. IMOJIE is significantly better than CopyAttention+BERT, the strongest baseline, on both these measures (Table 7). Interestingly, IMOJIE has a lower redundancy than even the gold triples; this is due to imperfect recall. 6.4 The Value of Iterative Memory To what extent does the IMOJIE style of generating tuples improve performance, over and above the use of BERT? We add BERT to CopyAttention model to generate another baseline for a fair comparison against the IMOJIE model. When trained only on OpenIE4, IMOJIE continues to outperform CopyAttention+BERT baseline by (1.6, 0.3, 2.8) pts in (Optimal F1, AUC, Last F1), which provides strong evidence that the improvements are not solely by virtue of using a better encoder. We repeat this experiment over different (single) bootstrapping datasets. Table 8 depicts that IMOJIE consistently outperforms CopyAttention+BERT model. We also note that the order in which the extractions are presented to the model (during training) is indeed important. On training IMoJIE using a randomized-order of extractions, we find a decrease of 1.6 pts in AUC (averaged over 3 runs). 
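For reference, the two redundancy measures reported in Table 7 (Section 6.3) can be computed as in the sketch below. The stop-word list is illustrative, and the paper reports corpus-level numbers, whereas this sketch computes the statistics for the extractions of a single sentence.

```python
from itertools import combinations

STOP_WORDS = {"a", "an", "the", "of", "in", "to", "was", "is", ";"}  # illustrative stop list

def content_words(tuple_str):
    return {w for w in tuple_str.lower().split() if w not in STOP_WORDS}

def mean_number_of_occurrences(tuples):
    """MNO: average number of tuples that each output word appears in."""
    word_sets = [content_words(t) for t in tuples]
    vocab = set().union(*word_sets) if word_sets else set()
    if not vocab:
        return 0.0
    return sum(sum(w in s for s in word_sets) for w in vocab) / len(vocab)

def intersection_over_union(tuples):
    """IOU: |intersection| / |union| of content words, averaged over all tuple pairs."""
    word_sets = [content_words(t) for t in tuples]
    pairs = list(combinations(word_sets, 2))
    if not pairs:
        return 0.0
    return sum(len(a & b) / len(a | b) for a, b in pairs if a | b) / len(pairs)

exts = ["He ; was appointed ; Commander of the Order",
        "He ; was knighted ; in the 1953 Coronation Honours"]
print(mean_number_of_occurrences(exts), intersection_over_union(exts))
```

Higher values of either measure indicate more repetition across the extracted tuples, matching the interpretation used in Table 7.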
6.5 The value of Score-and-Filter To what extent does the scoring and filtering approach lead to improvement in performance? IMOJIE aggregates extractions from multiple systems through the scoring and filtering approach. It uses extractions from OpenIE-4 (190K), ClausIE (202K) and RnnOIE (230K) to generate a set of 215K tuples. Table 6 reports that IMOJIE does not perform well when this aggregation mechanism is turned off. We also try two supervised approaches to aggregation, by utilizing the gold extractions from CaRB’s dev set. • Extraction Filtering: For every sentence-tuple pair, we use a binary classifier that decides whether or not to consider that extraction. The input features of the classifier are the [CLS]embeddings generated from BERT after processing the concatenated sentence and extraction. The classifier is trained over tuples from CaRB’s dev set. • Sentence Filtering: We use an IMOJIE model (bootstrapped over OpenIE-4), to score all the tuples. Then, a Multilayer Perceptron (MLP) predicts a confidence threshold to perform the filtering. Only extractions with scores greater than this threshold will be considered. The input features of the MLP include the length of sentence, IMOJIE (OpenIE-4) scores, and GPT (Radford et al., 2018) scores of each extraction. This MLP is trained over sentences from CaRB’s dev set and the gold optimal confidence threshold calculated by CaRB. We observe that the Extraction, Sentence Filtering are better than no filtering by by 7.5, 11.2 pts in Last F1, but worse at Opt. F1 and AUC. We hypothesise that this is because the training data for the MLP (640 sentences in CaRB’s dev set), is not sufficient and the features given to it are not sufficiently discriminative. Thereby, we see the value of our unsupervised Score-and-Filter that improves the performance of IMOJIE by (3.8, 15.9) pts in 5879 System Bootstrapping System OpenIE-4 OpenIE-5 ClausIE RnnOIE Base 50.7, 29, 50.7 47.4, 25.1, 47.4 45.1, 22.4, 45.1 49.2, 26.5, 49.2 CopyAttention+BERT 51.6, 32.8, 49.6 48.7, 29.4, 48.0 47.4, 30.2, 43.6 47.9, 30.6, 41.1 IMOJIE 53.2, 33.1, 52.4 48.8, 27.9, 48.7 49.2, 31.4, 45.5 51.3, 31.1, 50.8 Table 8: Evaluating models trained with different bootstrapping systems. (Optimal F1, Last F1). The 1.2 pt decrease in AUC is due to the fact that the IMOJIE (no filtering) produces many low-precision extractions, that inflates the AUC. Table 5 suggests that the model trained on all three aggregated datasets perform better than models trained on any of the single/doubly-aggregated datasets. Directly applying the Score-and-Filter method on the test-extractions of RnnOIE+OpenIE4+ClausIE gives (Optimal F1, AUC, Last F1) of (50.1, 32.4, 49.8). This shows that training the model on the aggregated dataset is important. Computational Cost: The training times for CopyAttention+BERT, IMOJIE (OpenIE-4) and IMOJIE (including the time taken for Score-and-Filter) are 5 hrs, 13 hrs and 30 hrs respectively. This shows that the performance improvements come with an increased computational cost, and we leave it to future work to improve the computational efficiency of these models. 7 Error Analysis We randomly selected 50 sentences from the CaRB validation set. We consider only sentences where at least one of its extractions shows the error. 
We identified four major phenomena contributing to errors in the IMOJIE model: (1) Missing information: 66% of the sentences have at least one of the relations or arguments or both missing in predicted extractions, which are present in gold extractions. This leads to incomplete information. (2) Incorrect demarcation: Extractions in 60% of the sentences have the separator between relation and argument identified at the wrong place. (3) Missing conjunction splitting: In 32% of the sentences, our system fails to separate out extractions by splitting a conjunction. E.g., in the sentence “US 258 and NC 122 parallel the river north . . . ”, IMOJIE predicts just one extraction (US 258 and NC 122; parallel; ...) as opposed to two separate extractions (US 258; parallel; ...) and (NC 122; parallel; . . . ) as in gold. (4) Grammatically incorrect extractions: 38% sentences have a grammatically incorrect extraction (when serialized into a sentence). Additionally, we observe 12% sentences still suffering from redundant extractions and 4% miscellaneous errors. 8 Conclusions and Discussion We propose IMOJIE for the task of OpenIE. IMOJIE significantly improves upon the existing OpenIE systems in all three metrics, Optimal F1, AUC, and Last F1, establishing a new State Of the Art system. Unlike existing neural OpenIE systems, IMOJIE produces non-redundant as well as a variable number of OpenIE tuples depending on the sentence, by iteratively generating them conditioned on the previous tuples. Additionally, we also contribute a novel technique to combine multiple OpenIE datasets to create a high-quality dataset in a completely unsupervised manner. We release the training data, code, and the pretrained models.9 IMOJIE presents a novel way of using attention for text generation. Bahdanau et al. (2015) showed that attending over the input words is important for text generation. See et al. (2017) showed that using a coverage loss to track the attention over the decoded words improves the quality of the generated output. We add to this narrative by showing that deep inter-attention between the input and the partially-decoded words (achieved by adding previous output in the input) creates a better representation for iterative generation of triples. This general observation may be of independent interest beyond OpenIE, such as in text summarization. Acknowledgements Mausam is supported by IBM AI Horizons Network grant, an IBM SUR award, grants by Google, Bloomberg and 1MG, and a Visvesvaraya faculty award by Govt. of India. We thank IIT Delhi HPC facility for compute resources. Soumen is supported by grants from IBM and Amazon. We would like to thank Arpita Roy for sharing the extractions of SenseOIE with us. 9https://github.com/dair-iitd/imojie 5880 References Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging Linguistic Structure for Open Domain Information Extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL), 2015, pages 344– 354. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations (ICLR), 2015. Niranjan Balasubramanian, Stephen Soderland, Mausam, and Oren Etzioni. 2013. Generating Coherent Event Schemas at Scale. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2013, pages 1721–1731. Michele Banko, Michael J Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In International Joint Conference on Artificial Intelligence (IJCAI), 2007, volume 7, pages 2670–2676. Sangnie Bhardwaj, Samarth Aggarwal, and Mausam. 2019. CaRB: A Crowdsourced Benchmark for OpenIE. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pages 6263–6268. Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2011. An analysis of open information extraction based on semantic role labeling. In Proceedings of the sixth international conference on Knowledge capture, pages 113–120. ACM. Janara Christensen, Stephen Soderland, Gagan Bansal, et al. 2014. Hierarchical summarization: Scaling up multi-document summarization. In Proceedings of the 52nd annual meeting of the association for computational linguistics (volume 1: Long papers), pages 902–912. Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In Proceedings of Association for Computational Linguistics (ACL), 2018, pages 407–413. Luciano Del Corro and Rainer Gemulla. 2013. ClausIE: clause-based open information extraction. In Proceedings of the 22nd international conference on World Wide Web (WWW), 2013, pages 355–366. ACM. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam. 2011. Open Information Extraction: The Second Generation. In IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Catalonia, Spain, July 16-22, 2011, pages 3–10. IJCAI/AAAI. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying Relations for Open Information Extraction. In Proceedings of the Conference of Empirical Methods in Natural Language Processing (EMNLP ’11), Edinburgh, Scotland, UK. Angela Fan, Claire Gardent, Chlo´e Braud, and Antoine Bordes. 2019. Using Local Knowledge Graph Construction to Scale Seq2Seq Models to MultiDocument Inputs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), 2019. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A Deep Semantic Natural Language Processing Platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1–6. Kiril Gashteovski, Rainer Gemulla, and Luciano del Corro. 2017. MinIE: minimizing facts in open information extraction. In Association for Computational Linguistics (ACL), 2017. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating Copying Mechanism in Sequence-to-Sequence Learning. In Proceedings of Association for Computational Linguistics (ACL), 2016. Association for Computational Linguistics. 
Zhengbao Jiang, Pengcheng Yin, and Graham Neubig. 2019. Improving Open Information Extraction via Iterative Rank-Aware Learning. In Proceedings of the Association for Computational Linguistics (ACL), 2019. William L´echelle, Fabrizio Gotti, and Philippe Langlais. 2018. Wire57 : A fine-grained benchmark for open information extraction. In LAW@ACL. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems (NIPS), 2019, pages 13–23. 5881 Mausam. 2016. Open information extraction systems and downstream applications. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI), 2016, pages 4074–4077. AAAI Press. Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 523–534. Association for Computational Linguistics. Julian Michael, Gabriel Stanovsky, Luheng He, Ido Dagan, and Luke Zettlemoyer. 2018. Crowdsourcing Question-Answer Meaning Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 2018, Volume 2 (Short Papers), pages 560–568. Harinder Pal and Mausam. 2016. Demonyms and compound relational nouns in nominal OpenIE. In Proceedings of the 5th Workshop on Automated Knowledge Base Construction, pages 35–39. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training (2018). Carsten Rother, Vladimir Kolmogorov, Victor S. Lempitsky, and Martin Szummer. 2007. Optimizing Binary MRFs via Extended Roof Duality. 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8. Arpita Roy, Youngja Park, Taesung Lee, and Shimei Pan. 2019. Supervising Unsupervised Open Information Extraction Models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 728–737. Swarnadeep Saha, Harinder Pal, and Mausam. 2017. Bootstrapping for numerical OpenIE. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 317–323. Association for Computational Linguistics. Swarnadeep Saha et al. 2018. Open information extraction from conjunctive sentences. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2288–2299. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Association for Computational Linguistics (ACL), 2017. Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), Austin, Texas. Association for Computational Linguistics. Gabriel Stanovsky, Jessica Ficler, Ido Dagan, and Yoav Goldberg. 2016. Getting more out of syntax with PropS. CoRR, abs/1603.01648. Gabriel Stanovsky, Mausam, and Ido Dagan. 2015. OpenIE as an intermediate structure for semantic tasks. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 303–308. Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised Open Information Extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Volume 1 (Long Papers), pages 885–895. Mingming Sun, Xu Li, Xin Wang, Miao Fan, Yue Feng, and Ping Li. 2018. Logician: A unified end-toend neural approach for open-domain information extraction. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 556–564. Jesse Vig. 2019. A multiscale visualization of attention in the transformer model. In Proceedings of Association for Computational Linguistics (ACL), 2019. Ashwin K. Vijayakumar, Michael Cogswell, Ramprasaath R. Selvaraju, Qing Sun, Stefan Lee, David J. Crandall, and Dhruv Batra. 2018. Diverse Beam Search for Improved Description of Complex Scenes. In AAAI Conference on Artificial Intelligence, 2018, pages 7371–7379. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning (ICML), 2015, pages 2048–2057. Junlang Zhan and Hai Zhao. 2020. Span Model for Open Information Extraction on Accurate Corpus. In AAAI Conference on Artificial Intelligence, 2020, pages 5388–5399. 5882 IMOJIE: Iterative Memory-Based Joint Open Information Extraction (Supplementary Material) A Performance with varying sentence lengths In this experiment, we measure the performance of baseline and our models by testing on sentences of varying lengths. We partition the original CaRB test data into 6 datasets with sentences of lengths (9-16 words), (17-24 words), (25-32 words), (3340 words), (41-48 words) and (49-62 words) respectively. Note that the minimum and maximum sentence lengths are 9 and 62 respectively. We measure the Optimal F1 score of both Copy Attention + BERT and IMOJIE (Bootstrapped on OpenIE-4) on these partitions as depicted in Figure 4. We observe that the performance deteriorates with increasing sentence length which is expected as well. Also, for each of the partitions, IMOJIE marginally performs better as compared to Copy Attention + BERT. Figure 4: Measuring performance with varying input sentence lengths B Measuring Performance on Varying Beam Size We perform inference of the CopyAttention with BERT model on CaRB test set with beam sizes of 1, 3, 5, 7, and 11. We observe in Figure 5 that AUC increases with increasing beam size. A system can surge its AUC by adding several low confidence tuples to its predicted set of tuples. This adds low precision - high recall points to the Precision-Recall curve of the system leading to higher AUC. On the other hand, Last F1 experiences a drop at very high beam sizes, thereby capturing the decline in performance. Optimal F1 saturates at high beam sizes since its calculation ignores the extractions Figure 5: Measuring performance of CopyAttention with BERT model upon changing the beam size below the optimal confidence threshold. This analysis also shows the importance of using Last F1 as a metric for measuring the performance of OpenIE systems. 
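The behaviour discussed above follows directly from how the three summary statistics of Section 5.2 are read off a precision-recall curve. The sketch below uses hypothetical numbers purely to illustrate the trend: padding the low-confidence end of the curve raises AUC while lowering Last F1 and leaving Optimal F1 unchanged.

```python
def summarize_pr_curve(points):
    """Summarise a precision-recall curve given as (precision, recall) pairs
    ordered from high to low confidence threshold, so recall is non-decreasing."""
    def f1(p, r):
        return 0.0 if p + r == 0 else 2 * p * r / (p + r)

    optimal_f1 = max(f1(p, r) for p, r in points)
    last_p, last_r = points[-1]          # point at the zero-confidence threshold
    last_f1 = f1(last_p, last_r)

    # Trapezoidal area under the precision-recall curve.
    auc = 0.0
    for (p1, r1), (p2, r2) in zip(points, points[1:]):
        auc += (r2 - r1) * (p1 + p2) / 2.0
    return optimal_f1, auc, last_f1

# Hypothetical curve; the appended point mimics extra low-confidence,
# low-precision extractions added by a larger beam.
curve = [(0.80, 0.30), (0.70, 0.45), (0.60, 0.55)]
print(summarize_pr_curve(curve))
print(summarize_pr_curve(curve + [(0.30, 0.70)]))
```

On these numbers, the extra point increases AUC and decreases Last F1 while Optimal F1 stays the same, which is exactly the pattern observed when inflating the beam size.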
C Evaluation on other datasets We use sentences from other benchmarks with the CaRB evaluation policy and we find similar improvements, as shown in Table 9. IMOJIE consistently outperforms our strongest baseline, CopyAttention with BERT, over different test sets. This confirms that IMOJIE is domain agnostic. D Visualizing Attention Attention has been used in a wide variety of settings to help the model learn to focus on important things (Bahdanau et al., 2015; Xu et al., 2015; Lu et al., 2019). However, the IMOJIE model is able to use attention to understand which words have already been generated, to focus on remaining words. In order to understand how the model achieves this, we visualize the learnt attention weights. There are two attention weights of importance, the learnt attention inside the BERT encoder and the attention between the decoder and encoder. We use BertViz (Vig, 2019) to visualize the attention inside BERT. We consider the following sentence as the running example - ”he served as the first prime minister of australia and became a founding justice of the high court of australia”. We visualize the attention after producing the first extraction - “he; served; as the first prime minister of australia”. Intuitively, we understand that the model must focus on the words “founding” and “justice” in order to generate the next extraction - “he; became; a founding justice of the high court of australia”. In Figure 8 and Figure 9 (where the left-hand column contains the words 5883 Model Dataset Wire57 Penn Web CopyAttention + BERT 45.60, 27.70, 39.70 18.20, 7.9, 12.40 30.10, 18.00, 14.60 IMOJIE 46.20, 26.60, 46.20 20.20, 8.70, 15.50 30.40, 15.50, 26.40 Table 9: Evaluation on other datasets with the CaRB evaluation strategy which are used to attend while right-hand column contains the words which are attended over), we see that the words “prime” and “minister” of the original sentence have high attention over the same words in the first extraction. But the attention for “founding” and “justice” are limited to the original sentence. Based on these patterns, the decoder is able to give a high attention to the words “founding” and “justice” (as shown in Figure 10), in-order to successfully generate the second extraction ”he; became; a founding justice of the high court of australia”. Figure 6: BERT attention for the word ‘founding’ 5884 Figure 7: BERT attention for the word ‘justice’ Figure 8: BERT attention for the word ‘prime’ 5885 Figure 9: BERT attention for the word ‘minister’ 5886 Figure 10: Attention weights for the decoder
2020
521
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5887–5897 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5887 Improving Event Detection via Open-domain Trigger Knowledge Meihan Tong1, Bin Xu1, Shuai Wang2, Yixin Cao3∗, Lei Hou1, Juanzi Li1 and Jun Xie4 1Knowledge Engineering Laboratory, Tsinghua University, Beijing, China 2SLP Group, AI Technology Department, JOYY Inc, China 3National University of Singapore, Singapore 4SPPD Group, Tencent Inc, China [email protected], [email protected] [email protected], [email protected] [email protected], [email protected] [email protected] Abstract Event Detection (ED) is a fundamental task in automatically structuring texts. Due to the small scale of training data, previous methods perform poorly on unseen/sparsely labeled trigger words and are prone to overfitting densely labeled trigger words. To address the issue, we propose a novel Enrichment Knowledge Distillation (EKD) model to leverage external open-domain trigger knowledge to reduce the in-built biases to frequent trigger words in annotations. Experiments on benchmark ACE2005 show that our model outperforms nine strong baselines, is especially effective for unseen/sparsely labeled trigger words. The source code is released on https://github.com/shuaiwa16/ekd.git. 1 Introduction Event Detection (ED) aims at detecting trigger words in sentences and classifying them into predefined event types, which shall benefit numerous applications, such as summarization (Li et al., 2019) and reading comprehension (Huang et al., 2019). For instance, in S1 of Figure 1, ED aims to identify the word fire as the event trigger and classify its event type as Attack. Mainstream researches (Chen et al., 2015; Liu et al., 2017, 2018b; Liao and Grishman, 2010b; Zhao et al., 2018; Liu et al., 2018a) focus on the second step event type disambiguation via lexical and contextual features. However, it is also crucial to identify trigger words correctly as the preliminary step. Trigger word identification is a non-trivial task, which suffers from the long tail issue. Take the benchmark ACE2005 as an example: trigger words with frequency less than 5 account for 78.2% of the ∗Corresponding author. Densely Labeled Triggers Unseen/Sparsely Labeled Triggers S1: Now we 're hearing the boom of Iraqi guns as they fireAttack towards our positions . S2: Troops were trying to break up stone-throwing protests , but did not use live fireAttack. S4: The intifadaAttack exploded in September 2000 S3: A man was hackedAttack to death by the criminal Trigger identified by Open-domain Trigger Knowledge Figure 1: Examples of ED. fire is the densely labeled trigger for Attack event in ACE2005. Hacked and intifada are the unseen/sparsely labeled triggers in the training corpus. The red ones illustrate the triggers identified by open-domain trigger knowledge. total. The long tail issue makes supervised methods (Li et al., 2013; Yang et al., 2019) prone to overfitting and perform poorly on unseen/sparsely labeled triggers (Lu et al., 2019). Automatically generating more training instances seems to be a solution: expanding more instances by bootstrapping (Ferguson et al., 2018; Zhang et al., 2019; Cao et al., 2019) and expending more data from distantly supervised methods (Chen et al., 2017; Wang et al., 2019a). However, the performance of these methods on unseen/sparsely labeled trigger words is still unsatisfied, as shown in Table 1. 
We argue that these methods either lead to the homogeneity of the generated corpus, or subject to the low coverage of knowledge base. More importantly, the expanded data itself is unevenly distributed, and we cannot expect to alleviate the long tail problem with built-in bias data. In the paper, we empower the model with external knowledge called Open-Domain Trigger Knowledge to provides extra semantic support on unseen/sparsely labeled trigger words and improve trigger identification. Open-Domain Trig5888 Table 1: F score on unseen/sparsely and densely labeled triggers. DMBERT (Chen et al., 2015) refers to a supervised-only model with dynamic multi-pooling to capture contextual features; BOOTSTRAP (He and Sun, 2017) expands training data via bootstrapping. DGBERT expands training data with Freebase (Chen et al., 2017). Method Unseen Sparse Dense DMBERTsup−only 54.4 72.5 84.1 BOOTSTRAPsemi−sup 56.6 73.6 86.9 DGBERTdistant−sup 54.7 72.8 84.3 ger Knowledge is defined as a prior that specifies which words can trigger events without subject to pre-defined event types and the domain of texts. As shown in S1 of Figure 1, open-domain trigger knowledge can identify that hearing and fire as event triggers, even if hearing does not fit into any pre-defined event types in ACE2005. With opendomain trigger knowledge, we are able to discover unseen/sparsely triggers from the large-scale unlabeled corpus, which will improve the recall in trigger words identification. However, it is challenging to incorporate open-domain trigger knowledge into ED: Triggers identified by open-domain trigger knowledge do not always fit well with indomain labels, and thus can not be directly adopted as the trigger identification result. For example in S4 of Figure 1, open-domain trigger knowledge argues that exploded is the trigger word, while under the labeling rules of ACE2005, intifada is the trigger word. Specifically, we propose an Enrichment Knowledge Distillation (EKD) model to efficiently distill open-domain trigger knowledge from both labeled and abundant unlabeled corpora. We first apply a light-weight pipeline to equipment unlabeled sentences with trigger knowledge from WordNet. The method is not limited to specific domains, and thus can guarantee the coverage of trigger words. Then, given the knowledge enhanced data as well as ED annotations, we train a teacher model for better performance; meanwhile, a student model is trained to mimic teacher’s outputs using data without knowledge enhancement, which conforms to the distribution during inference. We further promote the generalization of the model by adding noise to the inputs of the student model. We evaluate our model on the ACE2005 ED benchmark. Our method surpasses nine strong baselines, and is especially effective for unseen/sparsely labeled triggers word. Experiments also show that the proposed EKD architecture is very flexible, and can be conveniently adapted to distill other knowledge, such as entity, syntactic and argument. Our contributions can be summarized as: • To the best of our knowledge, we are the first to leverage the wealth of the open-domain trigger knowledge to improve ED. • We propose a novel teacher-student model (EKD) that can learn from both labeled and unlabeled data, so as to improve ED performance by reducing the in-built biases in annotations. • Experiments on benchmark ACE2005 show that our method surpasses nine strong baselines which are also enhanced with knowledge. 
Detailed studies show that our method can be conveniently adapted to distill other knowledge, such as entities. 2 Related Work 2.1 Event Detection Traditional feature-based methods exploit both lexical and global features to detect events (Li et al., 2013). As neural networks become popular in NLP (Cao et al., 2018), data-driven methods use various superior DMCNN, DLRNN and PLMEE model (Duan et al., 2017; Nguyen and Grishman, 2018; Yang et al., 2019) for end-to-end event detection. Recently, weakly-supervised methods (Judea and Strube, 2016; Huang et al., 2017; Zeng et al., 2018; Yang et al., 2018) has been proposed to generate more labeled data. (Gabbard et al., 2018) identifies informative snippets of text as expending annotated data via curated training. (Liao and Grishman, 2010a; Ferguson et al., 2018) rely on sophisticated pre-defined rules to bootstrap from the paralleling news streams. (Wang et al., 2019a) limits the data range of adversarial learning to trigger words appearing in labeled data. Due to the long tail issue of labeled data and the homogeneity of the generated data, previous methods perform badly on unseen/sparsely labeled data and turn to overfitting densely labeled data. With open-domain trigger knowledge, our model is able to perceive the unseen/sparsely labeled trigger words from abundant unlabeled data, and thus successfully improve the recall of the trigger words. 5889 2.2 Knowledge Distillation Knowledge Distillation, initially proposed by (Hinton et al., 2015), has been widely adopted in NLP to distill external knowledge into the model (Laine and Aila, 2016; Saito et al., 2017; Ruder and Plank, 2018). The main idea is to adopt a student model to learn from a robust pre-trained teacher model. (Lee et al., 2018; Gong et al., 2018) reinforces the connection between teacher and student model by singular value decomposition and the laplacian regularized least squares. (Tarvainen and Valpola, 2017; Huang et al., 2018) stabilize the teacher model by a lazy-updated mechanism to enable student model not susceptible to external disturbances. (Liu et al., 2019) uses an adversarial imitation approach to enhance the learning procedure. Unlike previous methods that relied on golden annotations, our method is able to learn from pseudo labels and effectively extract knowledge from both labeled and unlabeled corpus. 3 Methodology In the section, we introduce the proposed Enrichment Knowledge Distillation (EKD) model, which leverages open-domain trigger knowledge to improve ED. In general, we have a teacher model and a student model. The teacher is fully aware of open-domain trigger knowledge, while the student is not equipped with open-domain trigger knowledge. We make the student model to imitate the teacher’s prediction to distill the open-domain trigger knowledge to our model. Figure 2 illustrates the architecture of the proposed EKD model. During training, we first pre-train the teacher model on labeled data, and then force the student model, under the knowledge-absent situation, to generate pseudo labels as good as the teacher model on both labeled and unlabeled data. By increasing the cognitive gap between teacher and student model, the student model has to learn harder. We first introduce how to collect the opendomain trigger knowledge in Knowledge Collection. We then illustrate how to exploit the labeled data to pre-train the teacher model in Feature Extraction and Event Prediction. 
Finally, we elaborate on how to force the student model to learn from the teacher model in Knowledge Distillation. 3.1 Notation Given the labeled corpus L = {(Si, Yi)}NL i=1 and abundant unlabeled corpus U = {(Sk)}NT k=NL+1, our goal is to jointly optimize two objections: 1) maximize the prediction probability P(Yi|Si) on labeled corpus L, 2) minimize the prediction probability discrepancy between the teacher P(Y ′ k|S+ k ) and student model P(Y ′ k|S− k ) on both L and U, where NT stand for the total number of sentences in both labeled and unlabeled data. S+ and S− stand for the enhanced and weakened variant of the raw sentence S, we will explain them in detail in the Section 3.5. Y = {y1, y2, . . . , yn} stands for the golden event type label, where each y ∈Y belongs to the 33 event types pre-defined in ACE and a ”NEGATIVE” event type (Chen et al., 2015; Nguyen et al., 2016; Feng et al., 2018). Y ′ is the pseudo label proposed by pre-trained teacher model. 3.2 Knowledge Collection Open-domain trigger knowledge elaborates whether a word triggers an event from the perspective of word sense. Whether the trigger is densely labeled or unseen/sparsely labeled, open-domain trigger knowledge will identify them without distinction. For instance in S3 in Figure 1, although hacked is a rare word and has not been labeled, judging from word sense, open-domain trigger knowledge successfully identifies hacked as a trigger word. We adopt a light-weight pipeline method, called Trigger From WordNet (TFW), to collect opendomain trigger knowledge (Araki and Mitamura, 2018). S+ = TFW(S) (1) TFW uses WordNet as the intermediary. It has two steps, 1) disambiguate word into WordNet sense, 2) determine whether a sense triggers an event. For the first step, we adopt IMS (Zhong and Ng, 2010) to disambiguate word into word sense in WordNet (Miller et al., 1990). We obtain the input features by POS tagger and dependency parser in Stanford CoreNLP (Manning et al., 2014). For the second step, we adopt the simple dictionary-lookup approach proposed in (Araki and Mitamura, 2018) to determine whether a sense triggers an event. TFW is not limited to particular domains, which is able to provide unlimited candidate triggers. With the support of the lexical database, TFW has high efficiency and can be applied to large-scale knowledge collection. Finally, we obtain a total of 733,848 annotated sentences from New York Times (Sandhaus, 2008) 5890 S5 Troops were trying to break up stone-throwing protests, but not use live fire. Attack 0.023 S6+ A man was hacked to death by the criminal Labeled Data Unlabeled Data … Feature Encoder Event Prediction Feature Encoder … 0 0.125 0.25 0.375 0.5 NA Attack Move Kill … … 0 0.15 0.3 0.45 0.6 NA Attack Move Kill … … … … Teacher Student Supervised Loss KL-divergence Loss Guide the student S6 A man was hacked to death by the criminal Knowledge Collection Figure 2: The architecture of the proposed EKD model. Besides the supervised signals, EKD exploits abundant unlabeled data by ensuring the prediction consistency of raw sentence and knowledge-attending sentence. corpus in the first half of 2007. The total number of triggers is 2.65 million, with an average of 3.6 triggers per sentence. 3.3 Feature Extraction We adopt BERT to obtain the hidden representation for both labeled and unlabeled sentences. BERT is a pre-trained language representation model, and BERT has achieved SOTA performance on a wide range of tasks, such as question answering and language inference. 
The powerful capability of BERT has also been demonstrated in ED scenario (Wang et al., 2019a). Formally, given the raw sentence S and knowledge-attending sentence S+, we feed them into BERT respectively, and adopt the sequence output of the last layer as the hidden representation for each word in S and S+. H = BERT(S) H+ = BERT(S+) (2) 3.4 Event Prediction After obtaining the hidden representation of sentencen S, we adopt a full-connected layer to determine the event type Y for each word in sentence S. We use S(i) and Y(i) to denote the i-th training sentence and its event type in labeled corpus L. We first transform the hidden representation H obtained from Section 3.3 to a result vector O, where Oijc represents the probability that the j-th word in Si belongs to the c-th event class. And then we normalize O by the softmax function to obtain the conditional probability. p(Y(i)|S(i), θ) = n X j=1 exp(Oijc) PC c=1 exp(Oijc) /n (3) Given the labeled corpus L = {Si, Yi}|NL i=1, the optimization object is defined as: JL(θ) = − NL X i=1 log p(Y(i)|S(i), θ) (4) 3.5 Knowledge Distillation In this section, we distill open-domain trigger knowledge into our model. The main idea is to force the student model, with only raw texts as the input, to generate as good pseudo labels as the teacher model on both labeled and unlabeled data. Formally, given golden event type Y , the objective is: p(Y |S+θ) = p(Y |S−, θ) (5) where p(Y |S+θ) and p(Y |S−, θ) are the predictions from the teacher and student model respectively. We share the parameters of the teacher and student model. The input of teacher model S+ is aware of the open-domain trigger knowledge, and the input of student model S−does not know. We give the detailed construction process of S+ and S−below. Knowledge-attending Sentences (S+) We embed the open-domain trigger knowledge into the sentence by Marking Mechanism. Specifically, we introduce two symbols, named B-TRI and ETRI to mark the beginning and ending boundary of triggers identified by open-domain trigger knowledge. Formally, given the raw sentence S = {w1, w2, . . . , wi, . . . , wn} and trigger wi identified by open-domain trigger knowledge, the knowledge-attending sentence is S+ = 5891 {w1, w2, . . . ,B-TRI, wi,E-TRI, . . . , wn}. Marking mechanism works well for our feature extractor BERT (Soares et al., 2019), which is very flexible in embedding knowledge, and can be conveniently adapted to other types of knowledge without heavily-engineered work. Note that the newly added symbols are lack of pre-trained embedding in BERT. Random initialization undermines the semantic meaning of the introduced symbols, where B-TRI indicates the beginning of a trigger, and E-TRI means the ending. We address the issue by fine-tuning BERT on the annotation sentences in Section 3.2. Specifically, we adopt Masked LM task (Devlin et al., 2018) to exploit surrounding words to learn the semantic representation of the introduced symbols (B-TRI and E-TRI) based on the Harris distributional hypothesis (Harris, 1954). The mask word rate is set to 0.15 and the accuracy of masked words achieves 92.3% after fine-tune. Knowledge-absent Sentences (S−) To make the student model learn harder from the teacher model, we further disturb the input of student model by randomly masking out triggers identified by open-domain trigger knowledge. In this way, the student model has to judge the event type of trigger word solely based on the surrounding context. Formally, given the raw sentence S = {w1, w2, . . . , wi, . . . 
, wn} and trigger wi identified by open-domain trigger knowledge, the knowledge-absent sentence is S−= {w1, w2, . . . ,[MASK], . . . , wn}. The mask words are not randomly selected, but among triggers determined by open-domain trigger knowledge, avoiding the model is optimized only for the non-trigger negative class. KL-divergence Loss We move the added symbols to the end of the sentence to ensure strict alignment of words in S+ and S−, and then we minimize the discrepancy between conditional probability p(Y |S−, θ) and p(Y |S+θ) with KL-divergence loss. Given the collection of labeled and unlabeled corpus T = {(Sk)}NL+NU k=1 , the KL-divergence loss is: JT (θ) = KL(p(Y |S+, θ)||p(Y |S−, θ)) = NL+NU X k=1 p(Y(k)|S+ (k), θ) p(Y(k)|S+ (k), θ) p(Y(k)|S− (k), θ) (6) KL divergence is asymmetric in the two distributions. We treat predictions from knowledge-absent inputs as approximate distributions and predictions from knowledge-attending inputs as approximated distributions. If we reverse the direction of approximation, the experimental results decline significantly. The reason may be that we should ensure the low-confidence predictions approximate the high-confidence predictions. 3.6 Joint Training The final optimization objection is the integration of the supervised loss from labeled dataset and KLdivergence loss from unlabeled dataset defined in Equation 4 and 6. J(θ) = JL(θ) + λ ∗JT (θ) (7) We stop the gradient descent of teacher model when calculating JT to ensure that the learning is from teacher to student. Since unlabeled data is much larger than the labeled data, joint training leads the model quickly overfitting the limited labeled data while still underfitting the unlabeled data. To handle the issue, we adopt the Training Signal Annealing (TSA) technique proposed in (Xie et al., 2019) to linearly release the ‘training signals’ of the labeled examples as training progresses. 4 Experiment 4.1 Experiment Setup Datasets For the labeled corpus, we adopt dataset ACE2005 to evaluate the overall performance. ACE2005 contains 13,672 labeled sentences distributed in 599 articles. Besides the pre-defined 33 event types, we incorporate an extra ”Negative” event type for non-trigger words. Following (Chen et al., 2015), we split ACE2005 into 529/30/40 for train/dev/test respectively. Evaluation We report the Precision, Recall and micro-averaged F1 scores in the form of percentage over all 33 events. A trigger is considered correct if both its type and offsets match the annotation. Hyperparameters For feature extraction, we adopt BERT as our backbone, which has 24 16head attention layers and 1024 hidden embedding dimension. For the batch size, The batch size of labeled data is 32, and we set the proportion of labeled and unlabeled data to 1:6. For most of our experiments, we set the learning rate 3e-5, the maximum sequence length 128 and the λ in joint training 1. Our model trains on one V100 for a 5892 half day. The best result appears around 12,500 epochs. Balancing the performance and training efficiency, we actually use 40,236 unlabeled data for knowledge distillation unless otherwise stated. All reported results are the average results of ten runs. We use Adam as the gradient descent optimizer. 
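For illustration, the distillation objective of Sections 3.5 and 3.6 can be written as the following minimal PyTorch-style sketch. The encoder interface (mapping a token list to per-token event-type logits), the function names, and the alignment handling are illustrative assumptions and do not correspond to the released implementation; only the marking symbols, the masking of identified triggers, the stopped teacher gradient, and the form of the loss come from the text above.

    import torch
    import torch.nn.functional as F

    B_TRI, E_TRI, MASK = "B-TRI", "E-TRI", "[MASK]"

    def build_s_plus(tokens, trigger_positions):
        # Knowledge-attending sentence: wrap each trigger identified by the
        # open-domain trigger knowledge with the B-TRI / E-TRI boundary symbols.
        out = []
        for i, tok in enumerate(tokens):
            if i in trigger_positions:
                out.extend([B_TRI, tok, E_TRI])
            else:
                out.append(tok)
        return out

    def build_s_minus(tokens, trigger_positions):
        # Knowledge-absent sentence: mask out only the identified triggers, so the
        # student must predict event types from the surrounding context alone.
        return [MASK if i in trigger_positions else tok for i, tok in enumerate(tokens)]

    def distillation_loss(encoder, tokens, trigger_positions):
        # KL(teacher || student): the teacher reads S+, the student reads S-,
        # and the two branches share parameters.
        s_plus = build_s_plus(tokens, trigger_positions)
        s_minus = build_s_minus(tokens, trigger_positions)
        with torch.no_grad():                          # stop gradients through the teacher
            teacher_logits = encoder(s_plus)           # (len(s_plus), num_classes)
        student_logits = encoder(s_minus)              # (len(tokens), num_classes)
        # The paper aligns S+ and S- by moving the added symbols to the end of the
        # sentence; here we simply drop the marker positions for the same effect.
        keep = [i for i, tok in enumerate(s_plus) if tok not in (B_TRI, E_TRI)]
        teacher_probs = F.softmax(teacher_logits[keep], dim=-1)
        student_log_probs = F.log_softmax(student_logits, dim=-1)
        return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

    def joint_loss(supervised_loss, kl_loss, lam=1.0):
        # Equation (7): J(theta) = J_L(theta) + lambda * J_T(theta).
        return supervised_loss + lam * kl_loss

A full implementation would additionally batch labeled and unlabeled sentences, handle BERT subword tokenization, and apply the TSA schedule when mixing the two losses during joint training.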
Baselines As our methods incorporate opendomain trigger knowledge, for fair competition, we compare our methods with two data-driven methods and five state-of-the-art knowledge-enhanced methods, including: DMCNN proposes a dynamic multi-pooling layer above CNN model to improve event detection (Chen et al., 2015). DLRNN exploits document information via recurrent neural networks (Duan et al., 2017). ANN-S2 exploits argument information to improve ED via supervised attention mechanisms (Liu et al., 2017).GMLATT adopts a gated cross-lingual attention to exploit the complement information conveyed by multilingual data (Liu et al., 2018a). GCN-ED exploits structure dependency tree information via graph convolutions networks and entity mentionguided pooling (Nguyen and Grishman, 2018). Lu’s DISTILL proposes a -learning approach to distill generalization knowledge to handle overfitting (Lu et al., 2019). TS-DISTILL exploits the entity ground-truth and uses an adversarial imitation based knowledge distillation approach for ED (Liu et al., 2019). AD-DMBERT adopts an adversarial imitation model to expend more training data (Wang et al., 2019b). DRMM employs an alternative dual attention mechanism to effectively integrate image information into ED (Tong et al., 2020). The last two baselines both use BERT as feature extractor. 4.2 Overall Performance Table 2: Overall Performance on ACE2005 dataset (%). The results of baselines are adapted from their original papers. Method Precision Recall F1 DMCNN 75.6 63.6 69.1 DLRNN 77.2 64.9 70.5 ANN-S2 78.0 66.3 71.7 GMLATT 78.9 66.9 72.4 GCN-ED 77.9 68.8 73.1 Lu’s DISTILL 76.3 71.9 74.0 TS-DISTILL 76.8 72.9 74.8 AD-DMBERT 77.9 72.5 75.1 DRMM 77.9 74.8 76.3 EKD (Ours) 79.1 78.0 78.6 Table 2 presents the overall performance of the proposed approach on ACE2005. As shown in Table 2, EKD (our) outperforms various state-ofthe-art models, showing the superiority of opendomain trigger knowledge and the effectiveness of the proposed teacher-student model. BERT-based models AD-DMBERT, DRMM and EKD (ours) significantly outperform the CNN-based or LSTMbased models, which is due to the ability to capture contextual information as well as large scale pretraining of BERT. Compared to these BERT-based models, our methods consistently improves the F score by 3.5% and 2.3%, which shows the superiority of our method even if the encoder is powerful enough. Compared to data-driven methods DMCNN and DLRNN, knowledge enhanced methods Lu’s DISTILL, TS-DISTILL and EKD (ours) improve the recall by a large margin. Due to the small scale of ACE2005, it is quite tricky to disambiguate triggers solely based on the surrounding context. Enhanced by external knowledge, these methods have a stand-by commonsense to depend on, which prevents from overfitting densely labeled trigger words and thus can discover more trigger words. Among them, our model achieves the best performance, which may be caused by two reasons: 1) The superiority of open-domain trigger knowledge. Compared to general linguistic knowledge used in Lu’s DISTILL and entity type knowledge used in TS-DISTILL, open-domain trigger knowledge is more task-related, which directly provides trigger candidates for trigger identification, and thus is more informative. 2) The superiority of the proposed teacher-student model. Our method is able to learn open-domain trigger knowledge from unlimited unlabeled data, while Lu’s DISTILL and TS-DISTILL can only learn from labeled data. 
It is worth noting that our model simultaneously improves precision. Unseen/sparsely labeled trigger words are usually rare words, which are typically monosemous and exhibiting a single clearly defined meaning. These words are easier for the model to distinguish, thereby resulting in the improvement of the overall precision. To evaluate whether EKD has distilled knowledge into model, we report the performance of EKD in the test set with and without knowledge. As illustrated in Table 3, whether the input data masters the open-domain knowledge or not, the performance makes no big difference (78.4% vs 78.6%), which shows EKD (our) already distills 5893 the knowledge into the model. During testing, our model needs no more engineering work for knowledge collection. Table 3: Performance of test set with or without opendomain trigger knowledge Test Set P R F without knowledge 78.8 78.1 78.4 with knowledge 79.1 78.0 78.6 4.3 Domain Adaption Scenario We use ACE2005 to simulate a domain adaption scenario. ACE2005 is a multi-domain dataset, with six domains: broadcast conversation (bc), broadcast news (bn), telephone conversation (cts), newswire (nw), usenet (un) and webblogs (wl). Following the common practice (Plank and Moschitti, 2013; Nguyen and Grishman, 2014), we adopt the union of bc and nw as source domains, and bc, ct, wl as three target domains. The event types and vocabulary distribution are quite different between the source and target domains (Plank and Moschitti, 2013). For evaluation, we split source domain data into train/test 4:1 and report the average results on ten runs as the final result. For baselines, MaxEnt and Joint (Li et al., 2013) are two feature-enriched methods, exploiting both lexical and global features to enhance the domain adaption ability. Nguyen’s CNN (Nguyen and Grishman, 2015) integrates the feature and neural approaches and proposes a joint CNN for domain adaption. We also compare with supervised SOTA PLMEE (Yang et al., 2019), which exploits the pre-trained language model BERT for event extraction. As illustrated in Table 4, our method achieves the best adaptation performance on both bc and wl target domains and achieve comparable performance on cts target domain. The superior of domain adaption may come from the open-domain trigger knowledge. The open-domain trigger knowledge is not subject to specific domains, which will detect all the event-oriented trigger words and cover the event type from both the source and the target domains. Armed with open-domain trigger knowledge, our model reinforces associations between source and target data, and thus has superior performance in domain adaption. 4.4 Various Labeling Frequencies In the section, we answer the question whether our model can address the long tail problem. According to the frequency in the training set, we divide trigger words into three categories: Unseen, Sparsely-Labeled and Densely-Labeled. The frequency of Sparsely-Labeled is less than 5 and the frequency of Densely-Labeled is more than 30. The baselines are 1) supervised-only method DMBERT (Chen et al., 2015), 2) distant-supervised method DGBERT (Chen et al., 2017) and 3) semisupervised method BOOTSTRAP (He and Sun, 2017). We replace the encoders in the three baselines to more powerful BERT to make the baseline stronger. As illustrated in Table 5, all the three baselines show a significant performance degradation in unseen/sparsely labeled scenarios due to the limited training data. Our method surpasses the baselines in all three settings. 
Especially, our method gains more improvement on unseen (+6.1%) and sparsely-labeled settings (+2.8%). Open-domain trigger knowledge allows us to discover unseen/sparsely triggers from the large-scale unlabeled corpus, which increases the frequency at which the model sees unseen/sparsely triggers. 4.5 Knowledge-Agnostic Then, to evaluate whether EKD (ours) can distill other knowledge types, we conduct experiments on the three most commonly used knowledge in ED scenario: 1) Entity knowledge. Entity type is an important feature for trigger disambiguation in ED (Zhang et al., 2007). We compare with (Liu et al., 2019), which distills ground-truth entity type knowledge via an adversarial teacher-student model. 2) Syntactic knowledge. Syntactic knowledge is implied in the dependency parse tree. The closer in tree, the more important of the word for the trigger (McClosky et al., 2011). Our baseline (Nguyen and Grishman, 2018) is the best syntactic knowledge enhanced model, which exploits structure dependency tree information via graph convolutions networks. 3) Argument knowledge. Event arguments play an important role in ED. Our baseline ANN-S2 (Liu et al., 2017) designs a supervised attention to leverage the event argument knowledge. For the adaption of our model, we obtain entity annotations by Stanford CoreNLP, syntactic by NLP-Cube(Boro et al., 2018) and argument by CAMR (Wang et al., 2015). The marking contents are: 1) For entity, we tag three basic entity types, People, Location and Organization. 2) For 5894 Table 4: Performance on domain adaption. We train our model on two source domains bn and nw, and test our model on three target domains bc, cts and wl. Methods In-Domain (bn+nw) bc cts wl P R F P R F P R F P R F MaxEnt 74.5 59.4 66.0 70.1 54.5 61.3 66.4 49.9 56.9 59.4 34.9 43.9 Joint 73.5 62.7 67.7 70.3 57.2 63.1 64.9 50.8 57.0 59.5 38.4 46.7 Nguyen’s CNN 69.2 67.0 68.0 70.2 65.2 67.6 68.3 58.2 62.8 54.8 42.0 47.5 PLMEE 77.1 65.7 70.1 72.9 67.1 69.9 70.8 64.0 67.2 62.6 51.9 56.7 EKD (ours) 77.8 76.1 76.9 80.8 65.1 72.1 71.7 61.3 66.1 69.0 49.9 57.9 Table 5: Performance of our method on various labeling frequencies trigger words. Methods Unseen Sparsely Labeled Densely Labeled P R F P R F P R F DMBERTsupervised−only 66.7 45.9 54.4 74.4 70.7 72.5 84.8 83.5 84.1 DGBERTdistant−supervised 76.5 42.6 54.7 75.7 70.1 72.8 85.9 83.8 84.3 BOOTSTRAPsemi−supervised 73.7 45.9 56.6 76.0 71.3 73.6 90.6 83.5 86.9 EKD (ours) 79.0 52.0 62.7 80.8 72.4 76.4 92.5 82.2 87.1 syntactic, we take the first-order neighbor of trigger word on dependency parse tree. We consider neighbors in both directions. 3) For argument, we focus on the words played as the ARG0-4 roles of the trigger in AMR parser following (Huang et al., 2017). As we do not know trigger words on unlabeled data, we use pseudo labels generated by pre-trained BERT instead. We encode the entity, syntactic and argument knowledge into sentences with the same Marking Mechanism in Section 3.2. To prevent information leakage, we only use that knowledge in the training procedure. As illustrated in Table 7, Our three adaption models, EKD-Ent, EKD-Syn and EKD-Arg, consistently outperform baselines on the F score, proving that the effectiveness of EKD is independent to specific knowledge type. EKD increases the cognitive gap between teacher model and student model to maximize knowledge utilization, and the idea universally works for all types of knowledge distillation. 
If we compare the performances from the perspective of knowledge type, the results show that open-domain trigger knowledge (EKD) is better than the argument knowledge (EKD-Arg), and they are both superior to the entity knowledge (EKDEnt) and syntactic knowledge (EKD-Syn). The reason might be the more task-related of the knowledge, the more informative of the knowledge. Since open-domain trigger knowledge and event argument knowledge consider the important words directly from the event sides, they are more valuable than the entity and syntactic knowledge in ED. 4.6 Case Study We answer the question of how and when the open-domain trigger knowledge enhances the understanding of event triggers. Table 6 gives examples about how open-domain trigger knowledge affects predictions of ED. In S1, since trek is a rare word that never shows up in the training procedure, supervised-only method fails to recognize it. Opendomain trigger knowledge provides the priory that trek should be an event trigger. Coupled with pretrained information that trek is similar to denselylabeled trigger words such as move, our model successfully recalls it. In S3, be is a very ambiguous word, and in most cases, be is not used as a trigger word in the labeled data. Supervised-only method is prone to overfitting the labeled data and fails to recognize it. Open-domain trigger knowledge owns word sense disambiguation ability, which knows that be here belongs to the word sense ‘occupy a certain position’ instead of the common word sense ‘have the quality of being’, and thus can successfully identify be as the trigger for event Start-Position. 5 Conclusion We leverage the wealth of the open-domain trigger knowledge to address the long-tail issue in ACE2005. Specifically, we adopt a WordNet-based pipeline for efficient knowledge collection, and then we propose a teacher-student model, EKD, to distill open-domain trigger knowledge from both labeled and abundant unlabeled data. EKD forces the student model to learn open-domain trigger knowledge from teacher model by mimicking the 5895 Table 6: Error analysis: How and When does the open-domain trigger knowledge improve ED? GT refers to the ground truth labels. On the unlabeled data, we use a majority vote of three humans as the ground truth. Sentence GT Prediction S S+ S1: Mr. Caste leaves at 5 A.M. for a train trek to manhatten and does not return utill 6 P.M. Transport O Transport S2: Militants in the region escalate their attacks in the weeks leading up to the inauguration of Nigeria’s president. Start-Position O Start-Position S3: Mr.Mason, who will be president of CBS radio, said that it would play to radio’s strengths in delivering local news. Start-Position O Start-Position Table 7: Knowledge-Agnostic. Knowledge Type Methods Metrics P R F Entity TS-DISTILL 76.8 72.9 74.8 EKD-Ent 74.5 78.6 76.5 improvement -2.3 +4.7 +1.7 Syntactic GCN-ED 77.9 68.8 73.1 EKD-Syn 76.5 76.3 76.4 improvement -1.4 +7.5 +3.3 Argument ANN-S2 78.0 66.3 71.7 EKD-Arg 75.8 78.4 77.1 improvement -2.2 +23.1 +5.4 predicted results of the teacher model. Experiments show that our method surpasses seven strong knowledge-enhanced baselines, and is especially efficient for unseen/sparsely triggers identification. 6 Acknowledgments This work is supported by the National Key Research and Development Program of China (2018YFB1005100 and 2018YFB1005101), NSFC Key Projects (U1736204, 61533018). 
It also got partial support from National Engineering Laboratory for Cyberlearning and Intelligent Technology, and Beijing Key Lab of Networked Multimedia. This research is supported by the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative. References Jun Araki and Teruko Mitamura. 2018. Open-domain event detection using distant supervision. In Proceedings of the 27th International Conference on Computational Linguistics, pages 878–891. Tiberiu Boro, Stefan Daniel Dumitrescu, and Ruxandra Burtica. 2018. NLP-cube: End-to-end raw text processing with neural networks. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 171–179. Yixin Cao, Lei Hou, Juanzi Li, and Zhiyuan Liu. 2018. Neural collective entity linking. In COLING. Yixin Cao, Zikun Hu, Tat-seng Chua, Zhiyuan Liu, and Heng Ji. 2019. Low-resource name tagging learned with weakly labeled data. In EMNLP. Yubo Chen, Shulin Liu, Xiang Zhang, Kang Liu, and Jun Zhao. 2017. Automatically labeled data generation for large scale event extraction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 409–419. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multipooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 167–176. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Shaoyang Duan, Ruifang He, and Wenli Zhao. 2017. Exploiting document level information to improve event detection via recurrent neural networks. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 352–361. Xiaocheng Feng, Bing Qin, and Ting Liu. 2018. A language-independent neural network for event detection. Science China Information Sciences, 61(9):092106. James Ferguson, Colin Lockard, Daniel Weld, and Hannaneh Hajishirzi. 2018. Semi-supervised event extraction with paraphrase clusters. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 359–364, New Orleans, Louisiana. Association for Computational Linguistics. Ruan Gabbard, Jay DeYoung, and Marjorie Freedman. 2018. Events beyond ace: Curated training for events. arXiv preprint arXiv:1809.05576. Chen Gong, Xiaojun Chang, Meng Fang, and Jian Yang. 2018. Teaching semi-supervised classifier via generalized distillation. In IJCAI, pages 2156–2162. 5896 Zellig S. Harris. 1954. Distributional structure. ¡i¿WORD¡/i¿, 10(2-3):146–162. Hangfeng He and Xu Sun. 2017. A unified model for cross-domain and semi-supervised named entity recognition in chinese social media. In Thirty-First AAAI Conference on Artificial Intelligence. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos qa: Machine reading comprehension with contextual commonsense reasoning. arXiv preprint arXiv:1909.00277. 
Lifu Huang, Heng Ji, Kyunghyun Cho, and Clare R Voss. 2017. Zero-shot transfer learning for event extraction. arXiv preprint arXiv:1707.01066. Mingkun Huang, Yongbin You, Zhehuai Chen, Yanmin Qian, and Kai Yu. 2018. Knowledge distillation for sequence model. In Interspeech, pages 3703–3707. Alex Judea and Michael Strube. 2016. Incremental global event extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2279– 2289. Samuli Laine and Timo Aila. 2016. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242. Seung Hyun Lee, Dae Ha Kim, and Byung Cheol Song. 2018. Self-supervised knowledge distillation using singular value decomposition. In European Conference on Computer Vision, pages 339–354. Springer. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 73–82. Wei Li, Dezhi Cheng, Lei He, Yuanzhuo Wang, and Xiaolong Jin. 2019. Joint event extraction based on hierarchical event schemas from framenet. IEEE Access, 7:25001–25015. Shasha Liao and Ralph Grishman. 2010a. Filtered ranking for bootstrapping in event extraction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 680–688. Association for Computational Linguistics. Shasha Liao and Ralph Grishman. 2010b. Using document level cross-event inference to improve event extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL ’10, pages 789–797, Stroudsburg, PA, USA. Association for Computational Linguistics. Jian Liu, Yubo Chen, and Kang Liu. 2019. Exploiting the ground-truth: An adversarial imitation based knowledge distillation approach for event detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6754–6761. Jian Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2018a. Event detection via gated multilingual attention mechanism. Statistics, 1000:1250. Shaobo Liu, Rui Cheng, Xiaoming Yu, and Xueqi Cheng. 2018b. Exploiting contextual information via dynamic memory network for event detection. arXiv preprint arXiv:1810.03449. Shulin Liu, Yubo Chen, Kang Liu, and Jun Zhao. 2017. Exploiting argument information to improve event detection via supervised attention mechanisms. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1789–1798. Yaojie Lu, Hongyu Lin, Xianpei Han, and Le Sun. 2019. Distilling discrimination and generalization knowledge for event detection via deltarepresentation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4366–4376. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60. David McClosky, Mihai Surdeanu, and Christopher D Manning. 2011. Event extraction as dependency parsing. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1626–1635. Association for Computational Linguistics. 
George A Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J Miller. 1990. Introduction to wordnet: An on-line lexical database. International journal of lexicography, 3(4):235– 244. Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300–309. Thien Huu Nguyen and Ralph Grishman. 2014. Employing word representations and regularization for domain adaptation of relation extraction. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 68–74. 5897 Thien Huu Nguyen and Ralph Grishman. 2015. Event detection and domain adaptation with convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 365–371, Beijing, China. Association for Computational Linguistics. Thien Huu Nguyen and Ralph Grishman. 2018. Graph convolutional networks with argument-aware pooling for event detection. In Thirty-Second AAAI Conference on Artificial Intelligence. Barbara Plank and Alessandro Moschitti. 2013. Embedding semantic similarity in tree kernels for domain adaptation of relation extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1498–1507. Sebastian Ruder and Barbara Plank. 2018. Strong baselines for neural semi-supervised learning under domain shift. arXiv preprint arXiv:1804.09530. Kuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada. 2017. Asymmetric tri-training for unsupervised domain adaptation. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2988–2997. JMLR. org. Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. arXiv preprint arXiv:1906.03158. Antti Tarvainen and Harri Valpola. 2017. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in neural information processing systems, pages 1195–1204. Meihan Tong, Shuai Wang, Yixin Cao, Bin Xu, Juaizi Li, Lei Hou, and Tat-Seng Chua. 2020. Image enhanced event detection in news articles. Chuan Wang, Nianwen Xue, and Sameer Pradhan. 2015. A transition-based algorithm for AMR parsing. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 366–375, Denver, Colorado. Association for Computational Linguistics. Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun, and Peng Li. 2019a. Adversarial training for weakly supervised event detection. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 998–1008. Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun, and Peng Li. 2019b. Adversarial training for weakly supervised event detection. In NAACL. Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. 2019. 
Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848. Hang Yang, Yubo Chen, Kang Liu, Yang Xiao, and Jun Zhao. 2018. Dcfee: A document-level chinese financial event extraction system based on automatically labeled training data. In Proceedings of ACL 2018, System Demonstrations, pages 50–55. Sen Yang, Dawei Feng, Linbo Qiao, Zhigang Kan, and Dongsheng Li. 2019. Exploring pre-trained language models for event extraction and generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5284– 5294. Ying Zeng, Yansong Feng, Rong Ma, Zheng Wang, Rui Yan, Chongde Shi, and Dongyan Zhao. 2018. Scale up event extraction learning via automatic training data generation. In Thirty-Second AAAI Conference on Artificial Intelligence. Kuo Zhang, Juan Zi, and Li Gang Wu. 2007. New event detection based on indexing-tree and named entity. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, pages 215–222. ACM. Tongtao Zhang, Heng Ji, and Avirup Sil. 2019. Joint entity and event extraction with generative adversarial imitation learning. Data Intelligence, 1(2):99– 120. Yue Zhao, Xiaolong Jin, Yuanzhuo Wang, and Xueqi Cheng. 2018. Document embedding enhanced event detection with hierarchical and supervised attention. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 414–419. Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of the ACL 2010 System Demonstrations, pages 78–83, Uppsala, Sweden. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5898–5905 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5898 Improving Low-Resource Named Entity Recognition using Joint Sentence and Token Labeling Canasai Kruengkrai Thien Hai Nguyen Sharifah Mahani Aljunied Lidong Bing DAMO Academy, Alibaba Group [email protected], {thienhai.nguyen,mahani.aljunied,l.bing}@alibaba-inc.com Abstract Exploiting sentence-level labels, which are easy to obtain, is one of the plausible methods to improve low-resource named entity recognition (NER), where token-level labels are costly to annotate. Current models for jointly learning sentence and token labeling are limited to binary classification. We present a joint model that supports multi-class classification and introduce a simple variant of self-attention that allows the model to learn scaling factors. Our model produces 3.78%, 4.20%, 2.08% improvements in F1 over the BiLSTM-CRF baseline on e-commerce product titles in three different low-resource languages: Vietnamese, Thai, and Indonesian, respectively. 1 Introduction Neural named entity recognition (NER) has become a mainstream approach due to its superior performance (Huang et al., 2015; Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016; Akbik et al., 2018). However, neural NER typically requires a large amount of manually labeled training data, which are not always available in low-resource languages. Training neural NER with limited labeled data can be very challenging. In this paper, we consider bridging multi-task learning (MTL) (Caruana, 1993; Ruder, 2017) and pretraining (Peters et al., 2018; Devlin et al., 2019) to leverage training signals of an auxiliary task that has a sufficiently large number of labeled data. Researchers have investigated a wide variety of auxiliary tasks and resources to boost the performance of neural NER, e.g., training coarsegrained NER (Aguilar et al., 2017), fine-tuning bilingual word embeddings (Wang et al., 2017), applying language models (Rei, 2017), integrating part-of-speech (POS) tagging (Lin et al., 2018), using cross-lingual knowledge (Feng et al., 2018), and learning paraphrases (Watanabe et al., 2019). Category: HEALTH_BEAUTY Title: COMBO Gôm xịt tóc Tigi Bed Head Label: O B-PRODUCT I-PRODUCT E-PRODUCT B-BRAND I-BRAND E-BRAND Translation: combo hairspray Tigi Bed Head ‘‘… Tigi Bed Head hairspray combo …’’ Category: ELECTRONICS Title: Ốp lưng silicon dẻo Hàn Quốc Label: B-PRODUCT E-PRODUCT S-MATERIAL S-PATTERN O O Translation: case silicon flexible Korea ‘‘… Korean flexible silicon case …’’ Figure 1: Examples of product titles with NER annotation in Vietnamese. Product categories are provided by sellers and can be used as sentence-level labels. While most of the previous studies have exploited token-level information from auxiliary tasks, a few of them have tried to use sentence-level information (Rei and Søgaard, 2018; Devlin et al., 2019). Our work is closely related to the joint labeling framework in Rei and Søgaard (2019). However, they only focused on binary classification, while we attempt to handle multi-class classification on both sentence and token levels. In this work, we focus on improving lowresource NER by exploiting large data, only having sentence-level labels. Figure 1 shows examples of product titles on an e-commerce website in Vietnamese. 
While the product titles with NER annotation done by our annotators are limited, those with product categories (e.g., ELECTRONICS) labeled by sellers are abundant and can be used to train a sentence-level classifier (the sellers are required to assign a category when uploading the product, but such input could be noisy as well). A key challenge is to pass useful training signals from the sentence-level classification to the token-level NER.

Our contributions are as follows. We present the joint sentence and token labeling framework that enables multi-class classification equipped with a pre-training strategy (§2.1). We show that the current attention mechanisms can produce suboptimal results and propose a simple approach that allows the model to learn scaling factors to obtain a proper attention distribution (§2.2). Results on product title texts indicate that the proposed method is effective for low-resource NER across three different languages: Vietnamese, Thai, and Indonesian.

Figure 2: Architecture of our joint sentence and token labeling model. The attention layer is optional, which can be skipped or replaced with the desired approach.

2 Proposed method

Figure 2 shows the architecture of our joint sentence and token labeling model. Our model is based on hard parameter sharing (Ruder, 2017) in which the hidden layers are shared between two tasks. The task-specific layers include a conditional random field (CRF) layer for NER and a linear layer for sentence classification (we use the term "sentence" to conform with the literature, although our data are not always complete sentences). Unlike standard MTL, which trains multiple tasks at once and expects the model to perform well on all tasks (Hashimoto et al., 2017; Rei and Søgaard, 2019), the goal of our work is to improve the performance of the main task (NER) using the auxiliary task (sentence classification) for creating pre-trained representations and as a regularizer.

2.1 Joint learning framework for multi-class classification

Shared layers. Let w_1, ..., w_T be an input token sequence, where w_t denotes the t-th token in the sequence. We represent each w_t using a pre-trained word embedding e_t ∈ R^{d_e}, where d_e is the dimensionality of word embeddings. We do not fine-tune word embeddings but project them into a new space using x_t = W_1 e_t, where W_1 ∈ R^{d_e × d_e} is a trainable weight matrix. We then feed the projected embedding sequence X = [x_1, ..., x_T] ∈ R^{T × d_e} to a bidirectional long short-term memory (BiLSTM) layer to obtain a forward hidden state sequence →H = [→h_1, ..., →h_T] ∈ R^{T × d_h/2} and a backward hidden state sequence ←H = [←h_1, ..., ←h_T] ∈ R^{T × d_h/2}, where d_h is the number of hidden units. We concatenate the hidden states of both directions to obtain the final hidden representation H = [h_1, ..., h_T] ∈ R^{T × d_h}, where h_t = concat(→h_t, ←h_t) ∈ R^{d_h}. We can either use H for both the sentence classification and NER tasks directly or apply an attention mechanism on it to help the model focus on particular tokens (detailed in §2.2).
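As a point of reference, the shared layers just described can be written as the minimal PyTorch sketch below. The class and variable names are illustrative; the pre-trained embedding matrix is assumed to be given (§3.2 reports 300-dimensional fastText embeddings and d_h = 512), and only the frozen embeddings, the projection x_t = W_1 e_t, and the single-layer BiLSTM come from the description above.

    import torch
    import torch.nn as nn

    class SharedEncoder(nn.Module):
        def __init__(self, pretrained_emb, d_h):
            super().__init__()
            d_e = pretrained_emb.size(1)
            # Word embeddings are not fine-tuned (freeze=True); only the projection is trained.
            self.emb = nn.Embedding.from_pretrained(pretrained_emb, freeze=True)
            self.proj = nn.Linear(d_e, d_e, bias=False)            # W1 in R^{d_e x d_e}
            self.bilstm = nn.LSTM(d_e, d_h // 2, batch_first=True,
                                  bidirectional=True)              # each direction has d_h / 2 units

        def forward(self, token_ids):
            # token_ids: (batch, T)  ->  H: (batch, T, d_h)
            x = self.proj(self.emb(token_ids))                     # projected embedding sequence X
            h, _ = self.bilstm(x)                                  # concatenated forward/backward states
            return h

Both task-specific heads consume H: the CRF layer for NER, and a linear layer applied after max-pooling over time for sentence classification.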
Sentence classification We create a fixed size vector by applying max-pooling (Collobert et al., 2011; Conneau et al., 2017) over H, which encourages the model to capture the most useful local features encoded in the hidden states. We feed the fixed size global feature vector to a linear layer to obtain the unnormalized predicted scores for each class. Let K be the number of target classes, sk be the k-th normalized predicted score after applying a softmax function, and t ∈RK be the one-hot encoded true label. To train the sentence classification model, we minimize the multi-class cross-entropy loss: LC = −1 N N X i=1 K X k=1 t(i) k log(s(i) k ), (1) where i denotes the sentence index, and N is the number of training examples. We not only train the sentence classification and NER models jointly but also pre-train the sentence classification model using a sufficiently large number of training examples with sentence-level labels only. We expect that pre-trained hidden representations would help the model generalize better on our main task, as described below. NER Following Huang et al. (2015); Lample et al. (2016), we feed H to a CRF layer to obtain the probability of a label sequence y. To train the NER model, we minimize the negative log-likelihood of the correct label sequences over the training set: LNER = −1 N N X i=1 logp(y(i)|H(i)). (2) 5900 Joint labeling objective Combining Eqs. (1) and (2), we obtain: LJOINT = LNER + λLC, (3) where λ is the balancing parameter. The LC acts as a regularization term, which helps in reducing the risk of overfitting on our main task. 2.2 Revisiting attention mechanisms We first consider a soft-attention mechanism (Shen and Lee, 2016), which is used in Rei and Søgaard (2018, 2019). This method is computationally efficient because the attention distribution a ∈RT over tokens in a sentence is computed from the final hidden representation without considering relationships between hidden states. Specifically, the new final representation H′ ∈RT×dh can be derived as follows: H′ = H + H ⊗a, a = ˜a PT j=1 ˜aj , ˜a = σ(w2g + b2), g = tanh(W3H⊤+ b3), (4) where w2 ∈Rdh, b2 ∈R, W3 ∈Rdh×dh, b3 ∈Rdh are trainable parameters, and ⊗denotes the column-wise matrix-vector multiplication. We use a residual connection (He et al., 2016) between the input hidden representation and the attention output as shown in Figure 2. H′ can be fed to NER and sentence classification. We further explore attention mechanisms that take into account the relationships between hidden states. In particular, we apply the multi-head self-attention mechanism in Transformer (Vaswani et al., 2017), which has shown promising results in many applications (Radford et al., 2018; Devlin et al., 2019). We replace Eq. (4) with: H′ = H + concat(head1, . . . , headn)WO, headj = attention(Qj, Kj, Vj), Qj, Kj, Vj = HWQ j , HWK j , HWV j , (5) where WQ j , WK j , WV j ∈Rdh× dh n ; WO ∈Rdh×dh are trainable parameters, and n is the number of parallel heads. The attention function can be computed by: attention(Q, K, V) = softmax(QK⊤ α )V. (6) We drop the head index j for simplicity and introduce the scaling factor α ∈R. When setting α = p dh/n, Eq. (6) falls back to the standard scaled dot-product attention in Transformer. Yan et al. (2019) observed that the scaled dot-product attention produces poor results for NER and proposed the un-scaled dot-product attention, where α = 1. 
In this work, we consider α as the softmax temperature (Hinton et al., 2015) that allows adjusting the probability distribution of a softmax output. Using a higher temperature yields a softer attention distribution. However, a sharper attention distribution might be more suitable for NER because only a few tokens in the sentence are named entities. Instead of setting α to 1 or p dh/n, we propose to learn the scaling factors δ ∈RT for each token. We modify Eq. (6) with: attention(Q, K, V) = softmax(QK⊤ δ )V, δ = min(ReLU(w4H⊤+ b4), p dh/n) + 1, (7) where w4 ∈Rdh, b4 ∈R are the trainable parameters. Since the ReLU activation function produces output values in the range [0, ∞), the t-th element of δ is bounded in the range [1, 1 + p dh/n]. This allows the model to dynamically adapt δ without increasing much computational cost. 3 Experiments 3.1 Datasets The data used in our experiments are product titles obtained from major e-commerce websites in Southeast Asian countries during May-June, 2019. They cover three languages, including Vietnamese (VI), Thai (TH), and Indonesian (ID). A product title is a brief, information-rich description (less than 200 characters) written by the sellers. We hired annotators and linguists for each language to annotate the product titles based on our definitions and annotation guidelines. After the annotation process, we obtained 2,000 product titles per language labeled with 6 product attribute NER tags, including PRODUCT, BRAND, CONSUMER_GROUP, MATERIAL, PATTERN, and 5901 COLOR. For each language, we split the data into 1,000/500/500 – training/development/test sets.3 The statistics of NER tags can be found in Table 3 (see Appendix A). For some NER tags, especially PRODUCT, the number of tags is much larger than the number of examples used. One reason is that the sellers writing a product title tend to include multiple different expressions referring to the same entity (near-synonyms), with the likely intention of acquiring more hits from potential customers. Using English to illustrate: “Genuine Leather Sling Bag Crossbody Bag Messenger bag for Men Women Office Laptop”, the underlined elements are 3 PRODUCT and 2 CONSUMER_GROUP entities. The other reason is that in one product title, it is common to find repeated identical expressions in the same language, as well as the same entity words appearing in English. Using a VI example to illustrate: “T-Shirt - Áo thun in phản quang Ao thun Nam - Ao thun nữ- Áo thun phong cách Nam Nữ”, the underlined elements refer to the same product (t-shirt), appearing multiple times in VI and in English. 3.2 Training details We implement our model on top of the Flair framework (Akbik et al., 2019), which has recently achieved state-of-the-art results in various sequence labeling tasks. Following Lample et al. (2016), we use the IOBES tagging scheme. We use the pretrained word embeddings of fastText4 (Bojanowski et al., 2016) with de = 300 dimensions for each language and a single-layer BiLSTM with dh = 512 hidden units. We apply a locked dropout (Merity et al., 2018) with the probability of 0.5 before and after the BiLSTM layer and to the attention output before the residual connection. For the multi-head self-attention layer, we adapt the implementation of “The Annotated Transformer” (Rush, 2018)5 and use its default hyperparameters. We train all models using Adam (Kingma and Ba, 2015) with the batch size of 32, the learning rate of 1e-3, and the gradient clipping of 5. 
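For concreteness, the attention variant with learned scaling factors in Eq. (7) of §2.2 can be sketched as follows for a single head; the multi-head case applies the same computation per head and concatenates the head outputs as in Eq. (5). Dividing row i of QK^T by the scaling factor of query token i is one natural reading of Eq. (7); the module and variable names here are illustrative rather than taken from the actual implementation.

    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LearnedScaleSelfAttention(nn.Module):
        def __init__(self, d_h, n_heads=1):
            super().__init__()
            self.d_k = d_h // n_heads
            self.wq = nn.Linear(d_h, self.d_k)
            self.wk = nn.Linear(d_h, self.d_k)
            self.wv = nn.Linear(d_h, self.d_k)
            self.scale = nn.Linear(d_h, 1)     # w4, b4: per-token scaling factor

        def forward(self, h):
            # h: (batch, T, d_h)
            q, k, v = self.wq(h), self.wk(h), self.wv(h)
            scores = torch.matmul(q, k.transpose(-2, -1))          # (batch, T, T)
            # delta_t is bounded in [1, 1 + sqrt(d_h / n)] as in Eq. (7).
            delta = torch.clamp(F.relu(self.scale(h)), max=math.sqrt(self.d_k)) + 1.0
            attn = F.softmax(scores / delta, dim=-1)               # delta broadcasts over query rows
            return torch.matmul(attn, v)                           # per-token attention output

In the full model, the head outputs are concatenated, projected by W^O, and added back to H through the residual connection of Eq. (5) before being passed to the CRF and classification layers.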
We initialize all model parameters by sampling from U(−0.1, 0.1). We set λ in Eq. (3) to 1. We use the same parameter setting for all languages. We apply early stopping in which the learning rate decays by 3For TH, 941 training examples remain after removing annotation errors. 4https://fasttext.cc/docs/en/crawl-vectors.html 5https://nlp.seas.harvard.edu/2018/04/03/attention.html 0.5 if the F1 score on the NER development set does not improve 3 times. We train until the learning rate drops below 1e-5, or the training epochs reach 100. 3.3 Pre-trained classification models We collect unannotated product titles for each language and group them into 6 main categories, including FASHION, HEALTH_BEAUTY, ELECTRONICS, HOME_FURNITURE, MOTORS, and OTHER. Since the number of product titles is different from one language to another, we can create 360k/30k, 1.2M/60k, 864k/60k – training/development sets for VI, TH, and ID, respectively. Since product titles are not segmented in TH, we segment them using a character cluster-based method simplified from the hybrid model of Kruengkrai et al. (2009). We implement our word segmenter based on CRFsuite (Okazaki, 2007) and train the model using the BEST corpus (Kosawat et al., 2009). We pre-train the classification models for each language. Since our batch size is relatively small compared to the training data size, we find it suffices to train for 2 epochs. The F1 scores on the development sets are 90.08%, 89.79%, and 91.91% for VI, TH, and ID, respectively. The pre-trained model parameters are used to initialize the projection and BiLSTM layers. 3.4 Main results We run each experiment 10 times using different random seeds and report the average F1 score. All experiments are run on NVIDIA Tesla P100 GPUs. Table 1 shows the results of various models on the test sets. The Joint models consistently show improvements over the NER-only models, while the Joint + Pre-trained models further boost the F1 scores. These results suggest that the proposed framework is effective for all three languages. The Joint + Pre-trained model with the Self + Learned attention mechanism achieves the best F1 scores at 62.16%, 61.54%, and 76.10% (i.e., 3.78%, 4.20%, and 2.08% improvements over the NER-only baselines) for VI, TH, and ID, respectively. In addition, we experiment using simple data augmentation. The “+10k” and “+50k” rows in Table 1 indicate the number of additional training examples automatically labeled using a dictionary created from the training set. We do not observe any improvement in both the development and test 5902 Model Attention VI TH ID NER-only (+10k) – 53.47 52.47 74.22 NER-only (+50k) – 51.12 50.35 71.60 NER-only – 58.38 57.34 74.02 Soft 58.18 57.49 74.20 Self + Scaled 58.82 57.80 74.55 Self + Un-scaled 59.68 58.53 75.24 Self + Learned 60.18 58.63 74.83 Joint – 59.47 58.81 74.67 Soft 59.50 58.82 74.88 Self + Scaled 59.34 58.46 75.03 Self + Un-scaled 60.58 59.56 75.66 Self + Learned 60.25 59.35 75.18 Joint + Pre-trained – 61.26 60.27 75.86 Soft 61.05 60.50 75.80 Self + Scaled 61.80 61.32 75.90 Self + Un-scaled 62.09 61.45 76.01 Self + Learned 62.16 61.54 76.10 Table 1: F1 scores on the test sets. 
NER-only = baseline BiLSTM-CRF; Joint = joint labeling model; Joint + Pre-trained = Joint initialized with the pre-trained classification model; Soft = soft-attention (Shen and Lee, 2016; Rei and Søgaard, 2019); Self = multi-head self-attention described in §2.2, where Scaled = scaled dot-product (Vaswani et al., 2017), Un-scaled = unscaled dot-product (Yan et al., 2019), and Learned = our learned scaling factors. Model VI TH ID Joint + Pre-trained & Self + Learned 62.16 61.54 76.10 w/o residual connection 61.28 61.52 75.74 w/o locked dropout 61.87 61.08 76.22 Table 2: Model ablations for our best configuration, the Joint + Pre-trained model with the Self + Learned attention mechanism. results and hence do not pursue this idea further with the attention mechanisms. Table 2 shows the model ablations for our best configuration, the Joint + Pre-trained model with the Self + Learned attention mechanism. Feeding the attention output to the CRF layer without the residual connection leads to a consistent drop in the F1 scores, although it shows a less pronounced effect on TH. The results indicate that the residual connection is a useful component in our architecture. Adding the attention output to the hidden representation without applying the locked dropout (i.e., setting the dropout probability to 0) hurts the F1 scores on VI and TH but shows an improvement on ID, suggesting that fine-tuning the dropout rate could help boost the F1 scores. 3.5 Discussion Our Self + Learned scaling approach shows the competitive results for the NER-only model and achieves the best results when training in tandem with the Joint + Pre-trained model. The Soft attention mechanism (Shen and Lee, 2016; Rei and Søgaard, 2019) shows slight or no improvements, suggesting that considering relationships between hidden states when computing the attention distribution is crucial for the NER task. The Self + Unscaled approach (Yan et al., 2019) yields better F1 scores than the Self + Scaled approach (Vaswani et al., 2017) for all configurations, suggesting that a sharper attention distribution is helpful for the NER task. Although VI, TH, and ID are used in Southeast Asia, they do not belong to the same language family and have different writing systems and scripts (i.e., VI = Austroasiatic; TH = Kra-Dai; ID = Austronesian). Handling these three languages without much engineering effort reflects the generalizability of our method. Furthermore, we examine whether our method still provides improvements, even if the NER training data size increases. We create an additional set of 2k labeled examples for VI and add them to the training set (3k in total). The baseline NER-only produces 66.81% F1, while Joint + Pre-trained with Self + Learned achieves 69.26% F1 (i.e., 2.45% improvement). 4 Conclusion We have shown that the proposed joint sentence and token labeling model is remarkably effective for low-resource NER in three different languages: Vietnamese, Thai, and Indonesian. Our model supports multi-class classification where the sentence and token labels can be weakly related, which indicates the potential of our model for many other real-world applications. Using a larger amount of general domain texts to build pre-trained representations (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Clark et al., 2020) can complement with our model and is one of the directions that we plan to take in future work. Acknowledgments We thank the anonymous reviewers for their constructive comments. 
Kruengkrai is grateful for support from National Institute of Informatics, Japan. 5903 References Gustavo Aguilar, Suraj Maharjan, Adrian Pastor LópezMonroy, and Thamar Solorio. 2017. A multi-task approach for named entity recognition in social media data. In Proceedings of ACL Workshop on Noisy User-generated Text, pages 148–153. Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. FLAIR: An easy-to-use framework for state-of-theart NLP. In Proceedings of NAACL (Demonstrations), pages 54–59. Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of COLING, pages 1638– 1649. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv:1607.04606. Richard Caruana. 1993. Multitask learning: A knowledge-based source of inductive bias. In Proceedings of ICML, pages 41–48. Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, pages 357–370. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pretraining text encoders as discriminators rather than generators. In Proceedings of ICLR. Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., pages 2493–2537. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of EMNLP, pages 670–680. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL, pages 4171– 4186. Xiaocheng Feng, Xiachong Feng, Bing Qin, Zhangyin Feng, and Ting Liu. 2018. Improving low resource named entity recognition using cross-lingual knowledge transfer. In Proceedings of IJCAI, pages 4071– 4077. Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In Proceedings of EMNLP, pages 1923–1933. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of CVPR, pages 770–778. Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. In Proceedings of NIPS Deep Learning and Representation Learning Workshop. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv:1508.01991. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. Krit Kosawat, Monthika Boriboon, Patcharika Chootrakool, Ananlada Chotimongkol, Supon Klaithin, Sarawoot Kongyoung, Kanyanut Kriengket, Sitthaa Phaholphinyo, Sumonmas Purodakananda, Tipraporn Thanakulwarapas, and Chai Wutiwiwatchai. 2009. Best 2009 : Thai word segmentation software contest. In Proceedings of International Symposium on Natural Language Processing (SNLP), pages 83–88. Canasai Kruengkrai, Kiyotaka Uchimoto, Jun’ichi Kazama, Kentaro Torisawa, Hitoshi Isahara, and Chuleerat Jaruskulchai. 2009. A word and character-cluster hybrid model for thai word segmentation. In Proceedings of InterBEST. 
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL, pages 260–270. Ying Lin, Shengqi Yang, Veselin Stoyanov, and Heng Ji. 2018. A multi-lingual multi-task architecture for low-resource sequence labeling. In Proceedings of ACL, pages 799–809. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of ACL, pages 1064–1074. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM language models. In Proceedings of ICLR. Naoaki Okazaki. 2007. CRFsuite: a fast implementation of conditional random fields (CRFs). URL: http://www.chokkan.org/software/ crfsuite. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL, pages 2227– 2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pretraining. URL: https://openai.com/blog/ language-unsupervised. 5904 Marek Rei. 2017. Semi-supervised multitask learning for sequence labeling. In Proceedings of ACL, pages 2121–2130. Marek Rei and Anders Søgaard. 2018. Zero-shot sequence labeling: Transferring knowledge from sentences to tokens. In Proceedings of NAACL, pages 293–302. Marek Rei and Anders Søgaard. 2019. Jointly learning to label sentences and tokens. In Proceedings of AAAI, pages 6916–6923. Sebastian Ruder. 2017. An overview of multitask learning in deep neural networks. arXiv:1706.05098. Alexander Rush. 2018. The annotated transformer. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 52–60. Sheng-syun Shen and Hung-yi Lee. 2016. Neural attention models for sequence classification: Analysis and application to key term extraction and dialogue act detection. In Proceedings of INTERSPEECH, pages 2716–2720. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS, pages 5998– 6008. Dingquan Wang, Nanyun Peng, and Kevin Duh. 2017. A multi-task learning approach to adapting bilingual word embeddings for cross-lingual named entity recognition. In Proceedings of IJCNLP, pages 383–388. Taiki Watanabe, Akihiro Tamura, Takashi Ninomiya, Takuya Makino, and Tomoya Iwakura. 2019. Multitask learning for chemical named entity recognition with chemical compound paraphrasing. In Proceedings of EMNLP-IJCNLP, pages 6243–6248. Hang Yan, Bocao Deng, Xiaonan Li, and Xipeng Qiu. 2019. TENER: adapting transformer encoder for named entity recognition. arXiv:1911.04474. A Statistics of NER tags Table 3 shows the statistics of NER tags in the training, development, and test sets. 5905 NER Type VI TH ID Train Dev Test Train Dev Test Train Dev Test BRAND 358 160 170 725 408 387 490 215 229 COLOR 488 249 195 640 298 322 582 277 295 CONSUMER_GROUP 763 369 341 399 238 217 1910 1098 1026 MATERIAL 291 154 135 490 258 221 260 109 151 PATTERN 843 435 392 501 273 245 1021 537 493 PRODUCT 1982 964 963 2808 1473 1521 4786 2584 2557 TOTAL 4725 2331 2196 5563 2948 2913 9049 4820 4751 Table 3: Statistics of NER tags.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5906–5917 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5906 Multi-Cell Compositional LSTM for NER Domain Adaptation Chen Jia†‡ and Yue Zhang‡§ †Fudan University, China ‡School of Engineering, Westlake University, China §Institute of Advanced Technology, Westlake Institute for Advanced Study, China {jiachen,zhangyue}@westlake.edu.cn Abstract Cross-domain NER is a challenging yet practical problem. Entity mentions can be highly different across domains. However, the correlations between entity types can be relatively more stable across domains. We investigate a multi-cell compositional LSTM structure for multi-task learning, modeling each entity type using a separate cell state. With the help of entity typed units, cross-domain knowledge transfer can be made in an entity type level. Theoretically, the resulting distinct feature distributions for each entity type make it more powerful for cross-domain transfer. Empirically, experiments on four few-shot and zeroshot datasets show our method significantly outperforms a series of multi-task learning methods and achieves the best results. 1 Introduction Named entity recognition (NER) is a fundamental task in information extraction, providing necessary information for relation classification (Mooney and Bunescu, 2006), event detection (Popescu et al., 2011), sentiment classification (Mitchell et al., 2013), etc. NER is challenging because entity mentions are an open set and can be ambiguous in the context of a sentence. Due to relatively high cost in manual labeling, cross-domain NER has received increasing research attention. Recently, multi-task learning methods (Yang et al., 2017; Wang et al., 2018, 2019; Zhou et al., 2019; Jia et al., 2019) have achieved great success for cross-domain NER. Other methods such as fine-tuning (Rodriguez et al., 2018), share-private (Cao et al., 2018; Lin and Lu, 2018) and knowledge distill (Yang et al., 2019) also show effectivenesses for cross-domain NER. There are three main source of challenges in cross-domain NER. First, instances of the same type entities can be different across domains. For example, typical person names can include “Trump” and “Clinton” in the political news domain, but “James” and “Trout” in the sports domain. Second, different types of entities can exhibit different degrees of dissimilarities across domains. For example, a large number of location names are shared in the political news domain and the sports domain, such as “Barcelona” and “Los Angeles”, but the case is very different for organization names across these domains. Third, even types of entities can be different across domains. For example, while disease names are a type of entities in the medical domain, it is not so in the biochemistry domain. We investigate a multi-cell compositional LSTM structure to deal with the above challenges by separately and simultaneously considering the possibilities of all entity types for each word when processing a sentence. As shown in Figure 1, the main idea is to extend a standard LSTM structure by using a separate LSTM cell to model the state for each entity type in a recurrent step. Intuitively, the model differs from the baseline LSTM by simultaneously considering all possible entity types. 
A compositional cell (C cell) combines the entity-typed cells (ET cells) for the next recurrent state transition by calculating a weighted sum of the ET cells, where the weight of each ET cell corresponds to the probability of its corresponding entity type. Different from naive parameter sharing on LSTM (Yang et al., 2017), the source and target domains in our multi-task learning framework share only the ET cells corresponding to the same entity types and the same C cell, but not the domain-specific ET cells. In this way, our model learns domain-invariant knowledge at the entity level. Intuitively, our model addresses the above challenges by modeling entity type sequences more explicitly, which are relatively more robust across domains compared with entity instances. For example, the pattern “PER O PER O LOC” can exist in both the political and sports domains, even though the specific PER instances can be different.

[Figure 1: Overall structures. (a) Baseline LSTM unit; (b) multi-cell compositional LSTM unit; (c) multi-task learning framework. The red, blue and purple in (c) represent target, source and shared parts, respectively.]

In addition, thanks to the merging operation at each step, our method effectively encodes multiple entity type sequences in linear time, yielding a sausage-shaped multi-cell LSTM. This allows us to learn distributional differences between entity type chains across domains, which effectively reduces confusion between different entities when the source and target domains have different entity types in few-shot transfer, where the target domain has only a small amount of training data. In zero-shot transfer, where the target domain has no training data, a target-domain LM transfers source-domain knowledge. This knowledge transfer is also at the entity level, thanks to the compositional weights, which are supervised by gold-standard entity type knowledge during source-domain training. Theoretically, our method creates distinct feature distributions for each entity type across domains, which can give better transfer learning power compared to representation networks that do not explicitly differentiate entity types (§3.4). Empirically, experiments on four few-shot and zero-shot datasets show that our method gives significantly better results than standard BiLSTM baselines with the same numbers of parameters. In addition, we obtain the best results on four cross-domain NER datasets. The code is released at https://github.com/jiachenwestlake/Multi-Cell_LSTM.

2 Method

Given a sentence x = [x1, . . . , xm], the vector representation wt for each word xt is the concatenation of its word embedding and the output of a character-level CNN, following Yang et al. (2018). A bi-directional LSTM encoder is used to obtain sequence-level features h = [h1, . . . , hm]. We use the forward LSTM component to explain the details in the following subsections. Finally, a CRF layer outputs the label sequence y = l1, . . . , lm.
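Before the formal definitions in §2.1–§2.2 below, the following minimal PyTorch sketch illustrates what one recurrent step of such a unit can look like: every entity-typed (ET) cell computes its own candidate state from the shared input, and a compositional (C) cell merges the ET cell states with additive attention before updating the next compositional state. This is our own illustration of Eqs. 2–8, not the released code; all class and variable names here are invented for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiCellLSTMCell(nn.Module):
    """One forward step of a multi-cell compositional LSTM (sketch of Eqs. 2-8)."""
    def __init__(self, input_dim, hidden_dim, num_entity_types):
        super().__init__()
        self.l = num_entity_types
        # Per-entity-type ET cell parameters [W_k; b_k]: input gate and candidate cell.
        self.et_proj = nn.ModuleList(
            [nn.Linear(hidden_dim + input_dim, 2 * hidden_dim) for _ in range(self.l)]
        )
        # Compositional (C) cell parameters: input gate, output gate, candidate cell.
        self.c_proj = nn.Linear(hidden_dim + input_dim, 3 * hidden_dim)
        # Additive attention parameters [P; Q; v] used to weigh the ET cell states.
        self.P = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.Q = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, w_t, h_prev, c_hat_prev):
        x = torch.cat([h_prev, w_t], dim=-1)            # [h^(t-1); w^(t)]
        # ET cells: each keeps a copy of the previous compositional state (Eqs. 2-3).
        et_states = []
        for k in range(self.l):
            i_k, c_tilde_k = self.et_proj[k](x).chunk(2, dim=-1)
            i_k, c_tilde_k = torch.sigmoid(i_k), torch.tanh(c_tilde_k)
            et_states.append(i_k * c_tilde_k + (1.0 - i_k) * c_hat_prev)
        et_states = torch.stack(et_states, dim=1)       # (batch, l, hidden)
        # C cell gates and temporary state (Eq. 4).
        i_hat, o_hat, c_hat_tilde = self.c_proj(x).chunk(3, dim=-1)
        i_hat, o_hat = torch.sigmoid(i_hat), torch.sigmoid(o_hat)
        c_hat_tilde = torch.tanh(c_hat_tilde)
        # Additive attention over ET cell states (Eqs. 5-6).
        scores = self.v(torch.tanh(self.P(c_hat_tilde).unsqueeze(1)
                                   + self.Q(et_states))).squeeze(-1)
        alpha = F.softmax(scores, dim=1)                # sums to 1 over entity types
        c_alpha = (alpha.unsqueeze(-1) * et_states).sum(dim=1)
        # Update the compositional state and hidden vector (Eqs. 7-8).
        c_hat = i_hat * c_alpha + (1.0 - i_hat) * c_hat_prev
        h = o_hat * torch.tanh(c_hat)
        return h, c_hat, et_states, scores
```

In the full model this step would be run in both LSTM directions, and the returned ET cell states and attention scores would additionally feed the auxiliary entity type prediction and attention scoring losses described in §2.3.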
2.1 Baseline LSTM We adopt the standard LSTM (Graves and Schmidhuber, 2005) for the baseline. At each time step t (t ∈[1, ..., m]), the baseline calculates a current hidden vector h(t) based on a memory cell c(t). In particular, a set of input gate i(t), output gate o(t) and forget gate f(t) are calculated as follows:   i(t) o(t) f (t) ec(t)  =   σ σ σ tanh    W[h(t−1); w(t)] + b  c(t) = i(t) ⊙ec(t) + f (t) ⊙c(t−1) h(t) = o(t) ⊙tanh(c(t)), (1) where [W; b] are trainable parameters. σ represents the sigmoid activation function. 2.2 Multi-Cell Compositional LSTM As shown in Figure 1 (b), we split cell computation in the baseline LSTM unit into l copies, each corresponding to one entity type. These cells are shown in black. A compositional cell (shown in red) is used to merge the entity typed LSTM cells into one cell state for calculating the final hidden vector. In this process, a weight is assigned to each entity type according to the local context. Entity typed LSTM cells (ET cells). Given w(t) and ˆh(t−1), the input gate i(t) k and the temporary memory cell state ec(t) k of the k-th (k ∈[1, . . . , l]) entity typed cells (ET cells) are computed as: " i(t) k ec(t) k # =  σ tanh   Wk[ˆh(t−1); w(t)] + bk  , (2) where the [Wk; bk] represent the trainable parameters specific to the k-th ET cell. 5908 Then a copy of the compositional memory cell state ˆc(t−1) of the previous time step (t−1) is used to update the temporary memory cell state. c(t) k = i(t) k ⊙ec(t) k + (1 −i(t) k ) ⊙ˆc(t−1) (3) The above operations are repeated for l ET cells with the same ˆc(t−1). We finally acquire a list of ET cell states [c(t) 1 , . . . , c(t) l ]. Compositional LSTM cell (C cell). For facilitating integration of ET cells, a input gate ˆi(t) and a temporary cell state eˆc (t) of the compositional cell (C cell) are computed similarly to those of the ET cells, but another output gate ˆo(t) is added, which are computed as follows:   ˆi(t) ˆo(t) eˆc (t)  =   σ σ tanh    ˆ W[ˆh(t−1); w(t)] + ˆb  , (4) where [ ˆ W; ˆb] are trainable parameters of the C cell. Merging. We use the temporary cell state of the C cell eˆc (t) to weigh the internal representations of ET cells [c(t) 1 , . . . , c(t) l ] for obtaining a compositional representation. To this end, additive attention (Dzmitry et al., 2015) is used, which achieves better results in our development compared with other attention mechanism (Vaswani et al., 2017). The temporary memory cell state of the C cell ˆc(t) α is a weighted sum of [c(t) 1 , . . . , c(t) l ]: ˆc(t) α = l X k=1 α(t) k c(t) k s.t. l X k=1 α(t) k = 1 (5) The weight α(t) k reflects the similarity between eˆc (t) and the k-th ET cell state c(t) k . α(t) k is computed as: I(t) k = v⊤tanh(Peˆc (t) + Qc(t) k ) α(t) k = exp(I(t) k ) Pl j=1 exp(I(t) j ) , (6) where [P; Q; v] are trainable parameters. The memory cell state of the C cell is updated as: ˆc(t) = ˆi(t) ⊙ˆc(t) α + (1 −ˆi(t)) ⊙ˆc(t−1) (7) Finally, we obtain the hidden state ˆh(t): ˆh(t) = ˆo(t) ⊙tanh(ˆc(t)) (8) 2.3 Training Tasks Below we discuss the two auxiliary tasks before introducing the main NER task. The auxiliary tasks are designed in addition to the main NER task in order to better extract entity type knowledge from a set of labeled training data for training ET cells and C cell. Formally, denote a training set as Dent = {(xn, en)}N n=1, where each training instance consists of word sequence x = [x1, . . . , xm] and its corresponding entity types e = [e1, . . . , em]. 
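As a small illustration (assuming, consistently with the example labels that follow, that the type sequence is obtained simply by stripping the segmentation prefix from each NER tag), e can be derived from an NER label sequence as follows; the helper name is ours.

```python
def entity_type_sequence(ner_tags):
    """Strip segmentation prefixes (B-/I-/E-/S-) from NER tags, keeping 'O' as-is.

    Assumption: D_ent is built from the NER annotations this way, matching the
    example entity type labels given in the text.
    """
    return ["O" if tag == "O" else tag.split("-", 1)[-1] for tag in ner_tags]

# e.g. ["B-PER", "I-PER", "O", "B-LOC"] -> ["PER", "PER", "O", "LOC"]
```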
Here each entity type et is a label such as [PER, O, LOC,...] without segmentation tags (e.g., B/I/E). Entity type prediction. Given the ET cell states of xt: c(t) = [−→c (t) 1 ⊕←−c (t) 1 , . . . , −→c (t) l ⊕←−c (t) l ], we define the aligned entity distribution for xt: p(ek|xt) = exp{w⊤ k c(t) k + bk} Pl j=1 exp{w⊤ j c(t) j + bj} , (9) Where [wk; bk] are parameters specific to the k-th entity type ek. The negative log-likehood loss is used for training on Dent : Lent = − 1 |Dent| N X n=1 m X t=1 log(p(en t |xn t )) (10) Attention scoring. Similar to the entity type prediction task, given the attention scores between the temporary C cell and ET cells in Equation 6: I(t) = [(−→I (t) 1 + ←−I (t) 1 )/2, . . . , (−→I (t) l + ←−I (t) l )/2], we convert the attention scores to entity aligned distributions for xt using softmax: p(ek|xt) = exp(I(t) k ) Pl j=1 exp(I(t) j ) (11) Similar to the loss of entity type prediction: Latten = − 1 |Dent| N X n=1 m X t=1 log(p(en t |xn t )) (12) While entity type prediction brings supervised information to guide the ET cells, attention scoring introduces supervision to guide the C cell. NER. This is the main task across domains. Standard CRFs (Ma and Hovy, 2016) are used. Given h = [−→ h 1 ⊕←− h 1, . . . , −→ h m ⊕←− h m], the output probability p(y|x) over labels y=l1, . . . , lm is: p(y|x)= exp{P t(wlt CRF · ht + b (lt−1,lt) CRF )} P y′ exp{P t(w l′ t CRF · ht + b (l′ t−1,l′ t) CRF )} , (13) where y′ represents an arbitary labal sequence, and wlt CRF is a model parameter specific to lt, and b(lt−1,lt) CRF is a bias specific to lt−1 and lt. 5909 Algorithm 1 Transfer learning Input: Source-domain NER dataset Sner, target-domain NER dataset Tner or raw data Tlm and entity dictionary De Output: Target-domain model 1: while training steps not end do 2: for d in { Source, Target } do 3: for w(t) in [w(1), . . . , w(m)] do 4: {c(t) k }k∈Ed←{Ck(ˆh(t−1), w(t), ˆc(t−1))}k∈Ed eˆc (t) ←ˆC(ˆh(t−1), w(t)) {ˆh(t), ˆc(t)}←Atten  eˆc (t), {c(t) k }k∈Ed  (eq.2-8) 5: end for 6: Compute Ld a ←λentLent + λattenLatten 7: if d = Source then 8: Compute LS m ←LS ner 9: else if d = Target then 10: if do SDA then 11: Compute LT m ←LT ner 12: else if do UDA then 13: Compute LT m ←LT lm 14: end if 15: end if 16: L ←L + λdLd m + Ld a 17: end for 18: Update paremeters of networks based on L. 19: end while A sentence-level negative log-likehood loss is used for training on Dner={(xn, yn)}N n=1: Lner = − 1 |Dner| N X n=1 log(p(yn|xn)) (14) 3 Transfer Learning The multi-cell LSTM structure above is domain agnostic, and can therefore be used for in-domain NER too. However, the main goal of the model is to transfer entity sequence knowledge across domains, and therefore the ET cells and C cell play more significant roles in the transfer learning setting. Below we introduce the specific roles each cell is assigned in cross-domain settings. 3.1 Multi-Task Structure Following the common cross-domain setting, we use source-domain NER dataset Sner and the targetdomain NER dataset Tner or raw data Tlm. The entity type sets of source and target domains are represented as Ed, where d ∈{S, T}, respectively. As shown in Figure 1 (c), our multi-task learning structure follows Yang et al. (2017), which consists of shared embedding layer and shared BiLSTM layer, as well as domain-specific CRF layers. Our method replaces LSTM with multi-cell LSTM, following we introduce the multi-task parameter sharing mechanism in multi-cell LSTM. ET cells. 
All ET cells {Ck}k∈ES∪ET in multicell LSTM are a composion of entity-specific cells from both source and target domains. For each domain d ∈{S, T}, the actually used ET cells are the domain-specific subset {Ck}k∈Ed, aiming to conserve domain-specific features. C cell. In order to make the source and target domains share the same feature space in a word level, we use a shared C cell ˆC across domains. 3.2 Unsupervised Domain Adaptation To better leverage target-domain knowledge without target-domain NER labeled data, we conduct the auxiliary dictionary matching and language modeling tasks on target-domain raw data Tlm = {(xn)}N n=1. Auxiliary tasks. To better extract entity knowledge from raw data, we use a pre-collected named entity dictionary De by Peng et al. (2019) to label Tlm and obtain a set of entity words D+ ent, which are used to train entity prediction task and attention scoring task jointly. Language modeling. Follwing Jia et al. (2019), we use sampling softmax to compute forward LM probability pf(xt|x<t) and backward LM probability pb(xt|x>t), respectively: pf(xt|x<t)= 1 Z exp  w⊤ xt −→ h t−1+bxt  pb(xt|x>t)= 1 Z exp  w⊤ xt ←− h t+1+bxt  , (15) where wx and bx are the target word vector and bias, respectively. Z is the normalization item computed by the target word and negative samples. The LM loss function on Tlm is: LT lm = − 1 2 |Tlm| N,m X n,t=1 n log(pf(xn t |xn <t)) + log(pb(xn t |xn >t)) o (16) 3.3 Training Objective Algorithm 1 is the transfer learning algorithm under both supervised and unsupervised domain adaptation settings. Both source- and target-domain training instances undertake auxiliary tasks and obtain the loss La, which is a combination of Lent and Latten weighted by λent and λatten, respectively (line 6). Supervised domain adaptation. The auxiliary tasks as well as source- and target-domain NER tasks (line 8, 11) form the final training objective: LSDA = X d∈{S,T } n λdLd ner + Ld a o + λ 2 ∥Θ∥2, (17) 5910 where λd (d ∈{S, T}) are the domain weights for NER tasks. λ is the L2 regularization parameters and Θ represents the parameters set. Unsupervised domain adaptation. The training objective for UDA is similar to that of SDA, except for using target-domain LM task (line 13) instead of target-domain NER task: LUDA = LS ner + LT lm + LS a + LT a + λ 2 ∥Θ∥2 (18) 3.4 Theoretical Discussion Below we show theoretically that our method in §2.2 is stronger than the baseline method in §2.1 for domain adaptation. Following Ben-David et al. (2010), a domain is defined as a pair of input distribution D on X and a labeling function y: X→Y, where Y is a (l −1)-simplex1. According to this definition, <DS, yS > and <DT , yT > represent source and target domains, respectively. A hypothesis is a function h: X→{1, ..., l}, which can be a classification model. Target-domain error is defined as the probability hT disagrees with yT , ϵ(hT ) = ϵ(hT , yT ) = Ex∼DT [|yT −hT (x)|]. The training target for h is to minimize a convex weighted combination of source and target errors, ϵα(h) = αϵT (h) + (1 − α)ϵS(h), where α ∈[0, 1) is the domain weight, when α = 0, it is the setting of UDA. Theorem 1 Let h be a hypothesis in class H, then: ϵT (h) ≤ϵα(h) + (1 −α) 1 2dH∆H (DS, DT ) + λ  , where dH∆H (DS, DT ) = 2 sup h′,h′′∈H Prx∼DS  h′(x) ̸= h′′(x)  −Prx∼DT  h′(x) ̸= h′′(x)  Here λ is a constant that values the shared error of the ideal joint hypothesis. In dH∆H(DS, DT ), sup denotes the supremum of the right term for ∀h′, h′′∈H. 
Prx∼DS [h′(x) ̸= h′′(x)] denotes the probability according to the distribution DS that h′ disagrees with h′′ and Prx∼DT [h′(x) ̸= h′′(x)] is similar. Intuitively, the theorem states the upper bound of ϵT (h) based on ϵα(h) and the distance between DS and DT in the H∆H space, which is measured as the discrepancy between the two classifiers h′ and h′′. 1l is the total number of entity types in the source and target domains, such as {O, PER, LOC, ORG, MISC}. Our discussion also makes sense in the case that source domain and target domain have different entity types. The original theorem, however, concerns only one model h for transfer learning. In our supervised settings, in contrast, their CRF layers are specific to the source and target domains, respectively. Below we use h∗to denote our overall model with shared multi-cell LSTM model and domain-specific CRF layers. Further, we use h1 to denote the target domain subsystem that consists of the shared multicell LSTM model and the target-specific CRF layer, and h2 to denote its source counterpart. Theorem 1 can be extended to our settings as follows: Lemma 1 If ϵα(h∗) = αϵT (h1) + (1 −α)ϵS(h2), then: ϵT (h1) ≤2ϵα(h∗) + (1 −α) 3 2dH∆H (DS, DT ) + λ∗ Proof. The proof is mainly based on trangle inequalities, see Appendix A for details. □ Considering that the upper bounds of ϵT (h) (ϵT (h1)), ϵα(h) (ϵα(h∗)) and λ (λ∗) are small when training converges, our goal is to reduce dH∆H(DS, DT ). In particular, we define a model h is a composition function h = g ◦f, where f represents the multi-cell LSTM model and g represents the CRF layer, ◦denotes function composition. We assume h′ and h′′ share the same multi-cell LSTM model, namely h′= g′ ◦f and h′′= g′′◦f, we have dH∆H(DS, DT ) =2 sup g′, g′′∈G Prx∼DS  g′◦f(x)̸=g′′◦f(x)  −Prx∼DT  g′ ◦f(x)̸=g′′ ◦f(x)  To obtain the supremum of the right term, we may wish to assume that both g′ and g′′ can classify correctly in the source domain, then dH∆H(DS, DT )≈2 sup g′, g′′∈G Prx∼DT  g′◦f(x)̸=g′′◦f(x)  The optimization objective is as follows: min f∈F sup g′, g′′∈G Prx∼DT  g′ ◦f(x) ̸= g′′ ◦f(x)  Aiming to minf∈F dH∆H(DS, DT ), we decompose the unified feature space into several entity typed distributions using multi-cell LSTM, resulting in that source- and target-domain features belonging to the same entity type are clustered together. The proof is mainly based on the cluster assumption (Chapelle and Zien, 2005), which is equivalent to the low density separation assumption, states that the decision boundary should lie on a low-density region. According to the cluster assumption, both g′ and g′′ tend to cross the low-density regions in the shared 5911 Dataset Entity Type Size Train Dev Test CoNLL-2003 PER, LOC #Sentence 15.0K 3.5K 3.7K ORG, MISC #Entity 23.5K 5.9K 5.6K Broad Twitter PER, LOC #Sentence 6.3K 1.0K 2.0K ORG #Entity 8.8K 1.7K 4.4K Twitter PER, LOC #Sentence 4.3K 1.4K 1.5K ORG, MISC #Entity 7.5K 2.5K 2.5K BioNLP13PC CHEM, CC #Sentence 2.5K 0.9K 1.7K GGP, etc. #Entity 7.9K 2.7K 5.3K BioNLP13CG CHEM, CC #Sentence 3.0K 1.0K 1.9K GGP, etc. #Entity 10.8K 3.6K 6.9K CBS News PER, LOC #Sentence 2.0K ORG, MISC #Entity 3.4K Table 1: Statistic of datasets. feature space of both source and target domains. This results in Prx∼DT [g′ ◦f(x)̸=g′′ ◦f(x)] ≈ Prx∼DS [g′ ◦f(x)̸=g′′ ◦f(x)] ≈0, which well meets the above optimization objecive. 4 Experiments 4.1 Experimental Settings Datasets. 
We take six publicly available datasets for experiments, including BioNLP13PC and BioNLP13CG (N´edellec et al., 2013), CoNLL2003 English dataset (Sang and Meulder, 2003), Broad Twitter dataset (Derczynski et al., 2016), Twitter dataset (Lu et al., 2018) and CBS SciTech News dataset (Jia et al., 2019). Statistics of the datasets are shown in Table 1. In unsupervised domain adaptation experiments, 398,990 unlabeled sentences from CBS SciTech News collected by Jia et al. (2019) are used for target-domain LM training, a named entity dictionary from Web resource collected by Peng et al. (2019) is used for target-domain auxiliary tasks training. The CoNLL-2003, Twitter and CBS News have the same four types of entities, namely PER (person), LOC (location), ORG (organization) and MISC (miscellaneous). The Broad Twitter dataset consists of three types: PER, LOC and ORG. BioNLP13CG mainly consists of five types, namely CHEM (simple chemical), CC (cellular component), GGP (gene and gene product), SPE (species) and CELL (cell), BioNLP13PC mainly consists of three types: CHEM, CC and GGP. Hyperparameters. We choose NCRF++ (Yang and Zhang, 2018) for developing the models. The multi-task baselines are based on Jia et al. (2019). Our hyperparameter settings largely follow Yang et al. (2018); word embeddings for all models are initialized with PubMed 200-dimension vectors (Chiu et al., 2016) in BioNLP experiments and 20 40 60 80 100 Iteration 0.55 0.60 0.65 0.70 0.75 0.80 F-score NER Entity Prediction (right) Attention Scoring (right) 0.86 0.88 0.90 0.92 0.94 0.96 0.98 Accuracy (a) BioNLP. 20 40 60 80 100 Iteration 0.60 0.65 0.70 0.75 0.80 F-score NER Entity Prediction (right) Attention Scoring (right) 0.86 0.88 0.90 0.92 0.94 0.96 0.98 Accuracy (b) Twitter. Figure 2: Performances of the main NER and auxiliary tasks against the total number of training iteratons. GloVe 100-dimension vectors (Pennington et al., 2014) in other experiments. All word embeddings are fine-tuned during training. Character embeddings are randomly initialized. 4.2 Development Experiments Figure 2 shows the performances of the main targetdomain NER task and the auxiliary entity prediction and attention scoring tasks on the development sets of BioNLP13CG and Twitter when the number of training iterations increases. As can be seen from the figure, all the three tasks have the same trend of improvement without potential conflicts between tasks, which shows that all the three tasks take the feature space of the same form. 4.3 Supervised Domain Adaptation We conduct supervised domain adaptation on BioNLP dataset, Broad Twitter dataset and Twitter dataset, respectively. In particular, for the BioNLP dataset, BioNLP13CG is used as the target-domain NER dataset and BioNLP13PC as the source-domain dataset. These two datasets have some different entity types. In the Broad Twitter dataset, Broad Twitter is used as the target-domain dataset and the CoNLL-2003 as the source-domain dataset. These two datasets have a different entity type MISC. In the Twitter dataset, Twitter is used as the target-domain dataset and the CoNLL-2003 as the source-domain dataset. These two datasets have the same entity types. The overall results are listed in Table 2. Target-domain only settings. In comparison with target-domain only models BILSTM and MULTI5912 Methods Datasets BioNLP Broad Twitter Twitter F1 #Params F1 #Params F1 #Params Crichton et al. (2017) 78.90 Lu et al. (2018) 80.75 Wang et al. (2019) 82.48 Jia et al. 
(2019) 79.86 BILSTM+ELMO (Peters et al., 2018) 76.48 94,590K 82.83 94,631K BILSTM+BIOELMO (Peters et al., 2018) 85.61 94,605K BERT-BASE (Devlin et al., 2019) 77.28 108M 83.77 108M BIOBERT-BASE (Lee et al., 2020) 85.72 108M BILSTM 79.24 304K 72.98 210K 77.18 211K MULTI-CELL LSTM 78.76 2,704K 72.54 641K 77.05 743K MULTI-TASK (LSTM) 81.06 309K 73.84 214K 79.55 215K MULTI-TASK (LSTM)[REPRO]∗ 81.45 312K 73.82 214K 79.90 215K MULTI-TASK+PGN 81.17 4,533K 73.70 3,238K 80.07 3,239K MULTI-TASK+GRAD 81.63 447K 74.12 342K 79.72 344K OURS 83.12† 2,929K 74.82† 827K 81.37† 828K OURS+ELMO/BIOELMO 86.65 105M 76.36 97,090K 84.31 97,091K OURS+BERT-BASE/BIOBERT-BASE 86.96†‡ 117M 78.43†‡ 111M 85.80†‡ 111M Table 2: Results on three few-shot datasets. ∗indicates that we reproduce the baseline bi-directional LSTM in a similar way to our model for fair comparisons. † indicates statistical significance compared to target-domain settings and cross-domain settings with p < 0.01 by t-test. ‡ indicates statistical significance compared to LM pre-training based methods with p < 0.01 by t-test. CELL LSTM, all of the multi-task models obtain significantly better results on all of the three datasets. This shows the effectiveness of multi-task learning in few-shot transfer. Cross-domain settings. We make comparisons with the traditional parameter sharing mechanism MULTI-TASK(LSTM) (Yang et al., 2017) together with two improved methods, MULTI-TASK+PGN (Jia et al., 2019), which adds an parameter generation networks (PGN) to generate parameters for source- and target-domain LSTMs and MULTITASK+GRAD (Zhou et al., 2019), which adds a generalized resource-adversarial discriminator (GRAD) and leverages adversarial training. The results show that our method can significantly outperform these multi-task methods on the same datasets, which shows the effectiveness of our multi-cell structure in cross-domain settings. Comparison with the state-of-the-art models. Results show that our model outperforms crossdomain method of Jia et al. (2019), cross-type method of Wang et al. (2019) and methods using addition features (Crichton et al., 2017; Lu et al., 2018). Recently, LM pre-training based methods such as ELMO/BIOELMO (Peters et al., 2018), BERT (Devlin et al., 2019) and BIOBERT (Lee et al., 2020) achieve state-of-the-art results on NER. However, these methods use additional large-scale language resources, thus it is unfair to make direct comparisons with our method. Thus we leverage the outputs of LM pre-training methMethods F1 #Params #Raw Jia et al. (2019) 73.59 12,916K 18,474K BERT-BASE (Devlin et al., 2019) 74.23 108M 3,700M BILSTM 70.73 211K MULTI-CELL LSTM 70.03 743K BILSTM+LM 71.30 211K 1,931K BILSTM+LM+DICT 72.49 212K 1,931K MULTI-CELL LSTM+LM 72.81 743K 1,931K MULTI-CELL LSTM+LM(ALL) 73.56 743K 8,664K MULTI-CELL LSTM+LM+DICT 75.19† 743K 1,931K Table 3: Results on CBS News datasets. #Raw indictates number of words in raw data used in the experiment. † indicates statistical significance compared with all of the baselines with p < 0.01 by t-test. ods as contextualized word embeddings. In particular, we use the same batch size as our method and the Adam optimizer with an initial learning rate 3e-5 in BERT fine-tuning baselines. Results show that our method benifits from these LM pretraining output features and outperforms these LM pre-training based methods. 4.4 Unsupervised Domain Adaptation We conduct unsupervised domain adaptation on the CBS SciTech News test set, using CoNLL-2003 as the source-domain dataset. 
The overall results are listed in Table 3. Adding target-domain LM training. Only using the source-domain NER data, BILSTM and MULTICELL LSTM give comparable results, 70.73% F1 and 70.03% F1, respectively. In comparison with the source-domain only models, all of the models 5913 60 40 20 0 20 40 60 60 40 20 0 20 40 60 O(S) O(T) MISC(S) LOC(S) LOC(T) ORG(S) ORG(T) PER(S) PER(T) Figure 3: t-SNE visualization of ET cell states {ck}l k=1 on the CoNLL-2003 test set and Broad Twitter test set, differentiated by signal star and dot, respectively. Different entity types are represented by different colours. using LM obtain significantly better results, which shows the effectiveness of using target-domain LM in zero-shot transfer. When using the same amount of target-domain raw data as Jia et al. (2019), The result of MULTI-CELL LSTM+LM(ALL) is comparable to the state-of-the-art (Jia et al., 2019) (73.56% F1 v.s. 73.59% F1), which uses both source-domain LM and target-domain LM. This shows the effectiveness of multi-cell structure for zero-shot transfer. Adding a named entity dictionary. With the named entity dictionary collected by Peng et al. (2019), the results show a significant improvement (75.19% F1 v.s. 72.81% F1). To make fair comparison, we add the entity dictionary information to BILSTM+LM by doing an entity type prediction task together with the target-domain LM. BILSTM+LM+DICT achieves better result than BILSTM+LM (72.49% F1 v.s. 71.30% F1), but it still cannot be comparable to our results. This shows that the auxiliary tasks can help learn entity knowledge from raw data, even if the named entity dictionary can not label all entities in a sentence. 4.5 Analysis Visualization. In the proposed multi-cell LSTM, both ET cells and C cell play important roles in constructing a shared feature spaces across domains. We visualize feature spaces of ET cells and C cell in the Broad Twitter experiments. Figure 3 uses t-SNE (Maaten and Hinton, 2008) to visualize the ET cell states {ck}l k=1. From the figure we can see that different ET cells can generate different feature distributions (gathering in different clusters of different colours), and states Entity group CHEM CC GGP CELL SPE All Is in Source? ✓ ✓ ✓ × × LSTM F1 69.13 78.29 82.79 85.00 79.08 79.23 ∆ MULTI F1 73.57 79.67 85.83 85.14 79.47 81.05 ∆ +4.44 +1.38 +3.04 +0.14 +0.39 +1.82 Ours F1 74.95 80.00 86.67 87.10 81.92 82.70 ∆ +5.82 +1.71 +3.88 +2.10 +2.84 +3.47 Table 4: Fine-grained comparisons on BioNLP. of the same ET cell gather together across domains. This indicates that our model can learn cross-domain entity typed knowledge with the help of ET cells, which are more robust across domains. Figure 4 visualize the hidden vectors of the target-domain only baseline, the multi-task baseline and the proposed model. From the figure, we can see that both the multi-task baseline and ours can obtain similar feature distributions across domains compared with the target-domain only baseline. In comparison with the multi-task baseline, our model also shows strong matches across domains in an entity type level, which can better narrow the gap between source and target domains as discussed in §3.4. Fine-grained comparison. We make fine-grained comparisons between our model and the multi-task baseline on the BioNLP dataset, aiming to show how our model achieves better results on the entity type level. Following Crichton et al. (2017) and Jia et al. (2019), we study five well studied entity groups (not including all entity types) in BioNLP13CG. 
As shown in Table 4, both MULTI (Multi-Task baseline) and Ours achieve significant F1 improvement over the target-domain only baseline LSTM on the biochemistry entity groups that appear in both the target and the source datasets, such as CHEM, CC and GGP, which is consistent with intuition. But for biology entity groups not appearing in the source dataset, such as CELL and SPE, MULTI using traditional parameter sharing hardly improves the performances (+0.14% F1 for CELL and +0.39% F1 for SPE v.s. +1.82% F1 for All). In contrast, Ours achieves relatively strong improvements (+2.10% F1 for CELL and +2.84% F1 for SPE). This benefits from the distinct feature distributions across entity types by the multi-cell LSTM structure, which can effectively prevent the confusions drawn in a unified feature space. Ablation study. We conduct ablation studies on auxiliary tasks and model parameters. The results 5914 40 20 0 20 40 40 20 0 20 40 60 O(S) PER(S) LOC(S) ORG(S) MISC(S) O(T) PER(T) ORG(T) LOC(T) (a) Target-domain only. 40 20 0 20 40 40 20 0 20 O(S) PER(S) LOC(S) ORG(S) MISC(S) O(T) PER(T) ORG(T) LOC(T) (b) Multi-Task baseline. 40 20 0 20 40 20 0 20 40 O(S) PER(S) LOC(S) ORG(S) MISC(S) O(T) PER(T) ORG(T) LOC(T) (c) Ours. Figure 4: t-SNE visualization of hidden vectors on the CoNLL-2003 test set and Broad Twitter test set, represented by signal star and dot, respectively. Different entity types are represented by different colours. Methods Datasets BioNLP Broad Twitter CBS News F1 ∆ F1 ∆ F1 ∆ OURS 83.15 74.82 75.19 −Lent 82.71 -0.44 73.97 -0.85 74.95 -0.24 −Latten 81.65 -1.50 73.25 -1.57 73.04 -2.15 −Lent−Latten 81.74 -1.41 73.64 -1.18 72.59 -2.60 BILSTM-BASED 81.06 -2.09 73.84 -0.98 72.49 -2.70 STACKED BILSTMS 80.61 -2.54 73.86 -0.96 69.62 -5.57 HIDDEN EXPANSION 80.32 -2.83 72.34 -2.48 73.17 -2.02 Table 5: Ablation studies on BioNLP, Broad Twitter and CBS SciTech News datasets. are listed in Table 5. Auxiliary tasks. When we only ablate Lent, the results on all of the three datasets suffer significant decline (-0.44% F1 on BioNLP dataset, -0.85% F1 on Broad Twitter dataset and -0.24% F1 on CBS News dataset, respectively). When we only ablate Latten, the results on all of the three datasets suffer significant decline (over -1.5% F1 on all of the three datasets). When we both ablate Lent and Latten, our model achieves similar results as the BILSTM-BASED baseline. This indicates that domain transfer of our model depends heavily on both auxiliary tasks. Number of parameters. We use two strategies to make the number of parameters of BILSTMBASED baseline comparable to that of our model: (i) STACKED BILSTMS, stacking multi-layer BiLSTMs and enlarging the hidden size. (ii) HIDDEN EXPANSION, with similar model structure, just enlarging the hidden size. Our model still significantly outperforms these baselines, which shows that the effects of our model do not arise from a larger number of parameters. Case study. Table 6 shows a case study, “WHO” is an organization and “Nipah” is a virus. Without using target-domain raw data, BI-LSTM baseline miclassifies “Nipah” as ORG. Both Ours and Sentence The World Health Organization ( WHO ) describes Nipah infection as a “newly emerging zoonosis that causes severe disease in both animals and humans.” BILSTM The World Health Organization ( WHO O) describes Nipah ORG BILSTM+LM The World Health Organization ( WHO O) describes Nipah MISC Ours The World Health Organization ( WHO ORG) describes Nipah MISC Table 6: Example from CBS News test. 
Red and green represent incorrect and correct entities, respectively. BILSTM+LM give the correct results because this entity is mentioned in raw data. Using the multicell structure, our method learns the pattern “ORG, O, ORG, O” from source data without confusions by target-domain specific entities, thus Ours recognizes “WHO” correctly. 5 Conclusion We have investigated a multi-cell compositional LSTM structure for cross-domain NER under the multi-task learning strategy. Theoretically, our method benefits from the distinct feature distributions for each entity type across domains. Results on a range of cross-domain datasets show that multi-cell compositional LSTM outperforms BiLSTM under the multi-task learning strategy. Acknowledgments We thank the anonymous reviewers for their helpful comments and suggestions. We gratefully acknowledge funding from the National Natural Science Foundation of China (NSFC No.61976180) and the Westlake University and Bright Dream Joint Institute for Intelligent Robotics. Yue Zhang is the corresponding author. References Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman 5915 Vaughan. 2010. A theory of learning from different domains. Machine Learning, 79(1-2):151–175. Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, and Shengping Liu. 2018. Adversarial transfer learning for chinese named entity recognition with selfattention mechanism. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 182–192. Association for Computational Linguistics. Olivier Chapelle and Alexander Zien. 2005. Semisupervised classification by low density separation. In AISTATS, volume 2005, pages 57–64. Citeseer. Billy Chiu, Gamal Crichton, Anna Korhonen, and Sampo Pyysalo. 2016. How to train good word embeddings for biomedical nlp. In Proceedings of the 15th workshop on biomedical natural language processing, pages 166–174. Association for Computational Linguistics. Koby Crammer, Michael Kearns, and Jennifer Wortman. 2008. Learning from multiple sources. Journal of Machine Learning Research, 9(Aug):1757– 1774. Gamal Crichton, Sampo Pyysalo, Billy Chiu, and Anna Korhonen. 2017. A neural network multi-task learning approach to biomedical named entity recognition. BMC Bioinformatics, 18(1):368. Leon Derczynski, Kalina Bontcheva, and Ian Roberts. 2016. Broad twitter corpus: A diverse named entity recognition resource. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1169– 1179. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Bahdanau Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR, pages 1–15. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural networks, 18(5-6):602–610. Chen Jia, Xiaobo Liang, and Yue Zhang. 2019. Crossdomain ner using cross-domain language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2464–2474. Association for Computational Linguistics. 
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240. Bill Yuchen Lin and Wei Lu. 2018. Neural adaptation layers for cross-domain named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2012–2022. Association for Computational Linguistics. Di Lu, Leonardo Neves, Vitor Carvalho, Ning Zhang, and Heng Ji. 2018. Visual attention model for name tagging in multimodal social media. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1990–1999. Association for Computational Linguistics. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Long Papers), volume 1, pages 1064–1074. Association for Computational Linguistics. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9:2579–2605. Margaret Mitchell, Jacqui Aguilar, Theresa Wilson, and Benjamin Van Durme. 2013. Open domain targeted sentiment. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1643–1654. Association for Computational Linguistics. Raymond J Mooney and Razvan C Bunescu. 2006. Subsequence kernels for relation extraction. In Advances in neural information processing systems, pages 171–178. Claire N´edellec, Robert Bossy, Jin-Dong Kim, JungJae Kim, Tomoko Ohta, Sampo Pyysalo, and Pierre Zweigenbaum. 2013. Overview of bionlp shared task 2013. In Proceedings of the BioNLP Shared Task 2013 Workshop, pages 1–7. Association for Computational Linguistics. Minlong Peng, Xiaoyu Xing, Qi Zhang, Jinlan Fu, and Xuanjing Huang. 2019. Distantly supervised named entity recognition using positive-unlabeled learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2409–2419, Florence, Italy. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, volume 4, pages 1532–1543. Association for Computational Linguistics. 5916 Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Association for Computational Linguistics. Ana-Maria Popescu, Marco Pennacchiotti, and Deepa Paranjpe. 2011. Extracting events and event descriptions from twitter. In Proceedings of the 20th international conference companion on World wide web., pages 105–106. Juan Diego Rodriguez, Adam Caldwell, and Alex Liu. 2018. Transfer learning for entity recognition of novel classes. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1974–1985. Association for Computational Linguistics. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. 
In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Xuan Wang, Yu Zhang, Xiang Ren, Yuhao Zhang, Marinka Zitnik, Jingbo Shang, Curtis Langlotz, and Jiawei Han. 2019. Cross-type biomedical named entity recognition with deep multi-task learning. Bioinformatics, 35(10):1745–1752. Zhenghui Wang, Yanru Qu, Liheng Chen, Jian Shen, Weinan Zhang, Shaodian Zhang, Yimei Gao, Gen Gu, Ken Chen, and Yong Yu. 2018. Label-aware double transfer learning for cross-specialty medical named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1–15. Association for Computational Linguistics. Huiyun Yang, Shujian Huang, Xinyu Dai, and Jiajun Chen. 2019. Fine-grained knowledge fusion for sequence labeling domain adaptation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4188–4197. Association for Computational Linguistics. Jie Yang, Shuailong Liang, and Yue Zhang. 2018. Design challenges and misconceptions in neural sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3879–3889. Association for Computational Linguistics. Jie Yang and Yue Zhang. 2018. Ncrf++: An opensource neural sequence labeling toolkit. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics-System Demonstrations, pages 74–79. Association for Computational Linguistics. Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. In International Conference on Learning Representations. Joey Tianyi Zhou, Hao Zhang, Di Jin, Hongyuan Zhu, Meng Fang, Rick Siow Mong Goh, and Kenneth Kwok. 2019. Dual adversarial neural transfer for low-resource named entity recognition. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3461–3471. Association for Computational Linguistics. A Proof of Lemma 1 Proof. Given the precondition ϵα(h∗) = αϵT (h1)+ (1 −α)ϵS(h2), we use the trangle inequality as follows: ϵT (h1) −ϵα(h∗) ≤ ϵα(h∗) −ϵT (h1) =(1 −α) ϵS(h2) −ϵT (h1) ≤(1 −α) h ϵS(h2) −ϵS(h1, h2) + ϵS(h1, h2) −ϵT (h1, h2) + ϵT (h1, h2) −ϵT (h1) i The trangle inequality in Crammer et al. (2008) states that for a class of models F and expected error function ϵ if for all g1, g2, g3 ∈F, we have ϵ(g1, g2) ≤ϵ(g1, g3) + ϵ(g2, g3). Following the above formular and the definition of dH∆H(·, ·), we can further obtain: ϵT (h1) −ϵα(h∗) ≤(1 −α) h ϵS(h1) + ϵT (h2) + ϵS(h1, h2) −ϵT (h1, h2) i ≤(1 −α)  ϵS(h1) + ϵT (h2) + 1 2dH∆H(DS, DT )  Given the precondition ϵα(h∗) = αϵT (h1) + (1 − α)ϵS(h2), we consider two UDA settings: (i) domain T with hypothesis h1 as the source; (ii) domain S with hypothesis h2 as the source. 
Using Theorem 1 under $\alpha = 0$, we can obtain:
\begin{align*}
\epsilon_S(h_1) &\le \epsilon_T(h_1) + \tfrac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda_1 \\
\epsilon_T(h_2) &\le \epsilon_S(h_2) + \tfrac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda_2
\end{align*}
As is common in transfer learning, we set $1 > \alpha \ge \tfrac{1}{2}$, so that $\tfrac{\alpha}{1-\alpha} \ge 1$, further obtaining:
$$\epsilon_S(h_1) \le \tfrac{\alpha}{1-\alpha}\,\epsilon_T(h_1) + \tfrac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda_1$$
Applying these conclusions to the previous inequalities, we have:
$$(1-\alpha)\Big[\epsilon_S(h_1) + \epsilon_T(h_2) + \tfrac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)\Big] \le \alpha\,\epsilon_T(h_1) + (1-\alpha)\,\epsilon_S(h_2) + (1-\alpha)\Big[\tfrac{3}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda_1 + \lambda_2\Big]$$
Setting $\lambda^* = \lambda_1 + \lambda_2$, which is the shared error of the ideal joint hypothesis, and using the precondition $\epsilon_\alpha(h^*) = \alpha\,\epsilon_T(h_1) + (1-\alpha)\,\epsilon_S(h_2)$, we have
$$\epsilon_T(h_1) - \epsilon_\alpha(h^*) \le (1-\alpha)\Big[\epsilon_S(h_1) + \epsilon_T(h_2) + \tfrac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)\Big] \le \epsilon_\alpha(h^*) + (1-\alpha)\Big[\tfrac{3}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda^*\Big]$$
Finally, we obtain Lemma 1:
$$\epsilon_T(h_1) \le 2\,\epsilon_\alpha(h^*) + (1-\alpha)\Big[\tfrac{3}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda^*\Big] \qquad \Box$$
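For a hedged numeric illustration of the bound (the concrete values below are our own example, not taken from the paper): with $\alpha = 0.5$, $\epsilon_\alpha(h^*) = 0.1$, $d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) = 0.2$ and $\lambda^* = 0.05$, Lemma 1 gives

```latex
\epsilon_T(h_1) \le 2(0.1) + (1 - 0.5)\Big(\tfrac{3}{2}(0.2) + 0.05\Big)
              = 0.2 + 0.5 \times 0.35 = 0.375
```

so any reduction of the divergence term $d_{\mathcal{H}\Delta\mathcal{H}}$, which is what the multi-cell structure is argued to achieve, directly tightens the target-domain error bound.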
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5918–5928 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5918 Pyramid: A Layered Model for Nested Named Entity Recognition Jue Wang †, Lidan Shou ‡†∗, Ke Chen †, Gang Chen ‡† ‡State Key Laboratory of CAD&CG, †College of Computer Science and Technology, Zhejiang University, China {zjuwangjue,should,chenk,cg}@zju.edu.cn Abstract This paper presents Pyramid, a novel layered model for Nested Named Entity Recognition (nested NER). In our approach, token or text region embeddings are recursively inputted into L flat NER layers, from bottom to top, stacked in a pyramid shape. Each time an embedding passes through a layer of the pyramid, its length is reduced by one. Its hidden state at layer l represents an l-gram in the input text, which is labeled only if its corresponding text region represents a complete entity mention. We also design an inverse pyramid to allow bidirectional interaction between layers. The proposed method achieves state-of-the-art F1 scores in nested NER on ACE-2004, ACE2005, GENIA, and NNE, which are 80.27, 79.42, 77.78, and 93.70 with conventional embeddings, and 87.74, 86.34, 79.31, and 94.68 with pre-trained contextualized embeddings. In addition, our model can be used for the more general task of Overlapping Named Entity Recognition. A preliminary experiment confirms the effectiveness of our method in overlapping NER. 1 Introduction Named Entity Recognition (NER), which aims at identifying text spans as well as their semantic classes, is an essential and fundamental Natural Language Processing (NLP) task. It is typically modeled as a sequence labeling problem, which can be effectively solved by RNN-based approach (Huang et al., 2015; Lample et al., 2016; Ma and Hovy, 2016). However, such formulation oversimplifies the problem and is based on a very strong assumption that entity mentions do not overlap with each other, which is certainly not the real case. In real-world languages, entities might be deeply nested or overlapping, calling for better models to handle such complexity. ∗Corresponding author Former U.N. Ambassador Jeane Kirkpatrick ... Former U.N. U.N. Ambassador Ambassador Jeane Jeane Kirkpatrick ... Former U.N. Ambassador U.N. Ambassador Jeane Ambassador Jeane Kirkpatrick ... Former U.N. Ambassador Jeane U.N. Ambassador Jeane Kirkpatrick ... Former U.N. Ambassador Jeane Kirkpatrick ... layer 1 layer 2 layer 3 layer 4 layer 5 Former U.N. Ambassador Jeane Kirkpatrick ... inputs: labels: ORG ROLE FIRST NAME ROLE PER ROLE PER Figure 1: Pyramid output of a sentence from NNE (Ringland et al., 2019) containing 8 nested entities. Many previous studies have focused on recognizing nested entity mentions. A few works use proprietary structures, such as constituency graph (Finkel and Manning, 2009) or hypergraph (Lu and Roth, 2015; Muis and Lu, 2017), to explicitly capture nested entities. These structures, however, do not produce satisfactory performance results. Some other works handle nested entity mentions in a layered model, which employs multiple flat NER layers(Alex et al., 2007; Ju et al., 2018; Fisher and Vlachos, 2019). Each layer is usually responsible for predicting a group of nested entities having the same nesting level. Unfortunately, conventional layered schemes do not address the more general overlapping setting, and also suffer from layer disorientation. 
The latter is a problem arising when the model might output a nested entity from a wrong layer. For example, entity “U.N. Ambassador” is labeled as a secondlayer entity (containing “U.N.” and “Ambassador”). Thus, prediction of it from the first layer is considered an error. Generally, a false positive prediction with the correct span and class but from a wrong layer produces an over-estimated loss (despite the correct entity itself), causing the entire model reluctant to predict positive, and eventually harming the recall. This problem occurs quite often, as the 5919 target layer for a nested entity is determined by the nesting levels of its composing entities rather than by its own semantics or structure. A recent study on a layered model (Ju et al., 2018) also reports the error propagation issue, i.e. errors in the first few layers are propagated to the next layers. In this paper, we propose a novel layered model called Pyramid for nested NER. The model consists of a stack of inter-connected layers. Each layer l predicts whether a text region of certain length l, i.e. an l-gram, is a complete entity mention. Between each two consecutive layers of our model, the hidden state sequence is fed into a convolutional network with a kernel of two, allowing a text region embedding in the higher layer to aggregate two adjacent hidden states from the lower layer, and thus forming the pyramid look (as the length of the sequence in the higher layer is one token shorter than the lower layer). Such process enumerates all text spans without breaking the sequence structure. Figure 1 shows a sentence containing eight nested entities being fed into the Pyramid model. These entities are separated into 5 layers according to their number of tokens. The job of each decoding layer is simple and clear – it needs to output entity type when it encounters a complete entity. In the above scheme, the higher decoding layer relies on the output of the lower decoding layer in a bottom-up manner (from layer 1 to 5 in Figure 1). It is also desirable to construct an inverse pyramid, where a lower decoding layer receives input from a higher layer (from layer 5 to 1), allowing information to flow in the opposite way. Pyramid outperforms the previous methods in nested NER while addressing all the aforementioned problems with layered model. First, it can be used for more general overlapping NER. Second, it prevents layer disorientation as an l-length entity in the input is only predicted on layer l. Third, it mitigates the error propagation problem, as predictions in one layer do not dictate those in other layers. Our main contributions are as follows: • We propose a novel layered model called Pyramid for nested NER. The model recognizes entity mentions by its length without layer disorientation and error propagation. The proposed model can also address the more general overlapping NER task. • Besides the normal pyramid, we design an inverse pyramid to allow bidirectional interactions between neighboring layers. • We evaluate the proposed method on four datasets, namely ACE-2004 (Doddington et al., 2004), ACE-2005 (Walker et al., 2006), GENIA (Kim et al., 2003) and NNE (Ringland et al., 2019). The results suggest that our model significantly outperforms the previous methods, and achieves state-of-the-art performance with and without pre-trained language model embeddings (ALBERT (Lan et al., 2019), BERT (Devlin et al., 2019), and Flair (Akbik et al., 2018)). 
• Additionally, we construct a small dataset that contains overlapping but non-nested entities. Preliminary results on this dataset show the potential of our model for handling overlapping entities. 2 Related Work Existing approaches for recognizing nonoverlapping named entities usually treat the NER task as a sequence labeling problem. Various sequence labeling models achieve decent performance on regular NER, including probabilistic graph models such as Conditional Random Fields (CRF) (Ratinov and Roth, 2009), and deep neural networks like recurrent neural networks (RNN) and convolutional neural networks (CNN). Recently, LSTM-CRF has become a standard architecture for sequence labeling tasks. Huang et al. 2015 uses hand-crafted spelling features; Ma and Hovy 2016 uses CNN to capture character features; Lample et al. 2016 utilizes LSTM instead. These sequence labeling models can only detect non-overlapping entities and fail to handle nested ones. Nested NER has been intensively studied recently. Finkel and Manning 2009 proposes a CRFbased constituency parser and use a constituency tree to represent a sentence. Lu and Roth 2015 introduces the idea of hypergraph which allows edges to connect to multiple nodes to represent nested entities. Muis and Lu 2017 uses a multigraph representation and introduces the notion of mention separator for nested entity detection. Wang and Lu 2018 presents a neural segmental hypergraph model using neural networks to obtain distributed feature representation. Katiyar and Cardie 2018 also adopts a hypergraph-based formulation but instead uses neural networks to learn the structure. Lin et al. 2019 borrows the Anchor Region Networks (ARNs) architecture to predict nested entity 5920 mentions. All the above works design proprietary structures to explicitly capture nested entities. Layered models are common solution for nested NER. Alex et al. 2007 stacks multiple flat NER layers, where the first recognizes the innermost (or outermost) mentions, then the following taggers are used to incrementally recognize next-level mentions. Ju et al. 2018 dynamically stacks multiple flat NER layers and extract outer entities based on the inner ones. Fisher and Vlachos 2019 can also be considered as a layered model with a novel neural network architecture. Our method differs from the above layered models in that (1) it is able to handle overlapping NER, and (2) it does not suffer the layer disorientation or error propagation problem. Exhaustive region classification model enumerates all possible regions of the input sentence. Byrne 2007; Xu et al. 2017; Sohrab and Miwa 2018; Zheng et al. 2019 aggregate all possible adjacent tokens into potential spans. These spans, together with their left and right contexts, are fed into a classifier - a maximum entropy tagger (Byrne, 2007) or a neural network (Xu et al., 2017; Sohrab and Miwa, 2018; Zheng et al., 2019). Unfortunately, all these works fail to take advantage of the dependencies among nested entities, but perform prediction merely on individual text fragments, thus limiting the performance. Luan et al. 2019 uses propagation layers to capture relation and coreference between spans. Our method also potentially enumerates all possible spans, while maintaining the sequence structure, which leads to better performance. Pre-trained word embeddings, e.g. Glove (Pennington et al., 2014), have proved to be effective in improving NER performance. 
Recently, with the rapid development of language model techniques, the performance of NER models has been pushed to a new height. The recent pre-trained language model embeddings include ELMo (Peters et al., 2018), Flair (Akbik et al., 2018), BERT (Devlin et al., 2019), ALBERT (Lan et al., 2019), etc. In our experiments, we leverage these embeddings and observe significant performance improvements. 3 Proposed Method In this section, we describe the proposed model and its architecture, which includes an encoder, a pyramid, an inverse pyramid, and a logits layer. Figure 2 shows a toy model with a pyramid (5 bottom-up decoding layers in blue) and its inverse counterpart (5 top-down layers in pink). As shown in the blue pyramid, each decoding layer contains a convolutional network with a kernel of two to reduce the sequence length in its output, so that all possible mention spans can potentially be enumerated. The top-down inverse pyramid will be described later. We shall use the following notations: Embed the embedding layer LSTM the bidirectional LSTM layer LM the language model embedder Linear the fully-connected layer LayerNorm layer normalization The mentioned layers with the same notation, superscript and subscript share the same parameters. For the sake of brevity, we omit the dropout layer in this section. 3.1 The Input and Output The input is a T-length textual sentence. After the encoder, embedding sequences are recursively fed into flat NER decoding layers, producing L tag sequences in the IOB2-format1 with length T, T −1, ..., T −L + 1, where L is the number of decoding layers. Note we only label n-grams that are complete mentions, so I-{class} usually does not appear. Given the running example in Figure 1, input sentence “Former U.N. Ambassador Jeane Kirkpatrick ...” contains eight entity mentions, namely (U.N., ORG), (Ambassador, ROLE), (Jeane, FIRST), (Kirkpatrick, NAME), (U.N. Ambassador, ROLE), (Jeane Kirkpatrick, PER), (Former U.N. Ambassador, ROLE), and (Former U.N. Ambassador Jeane Kirkpatrick, PER). The output from the pyramid would contain layered tag sequences (l = 1, . . . , 5) as follows: l=5: B-PER ... l=4: O O ... l=3: B-PER O O ... l=2: O B-ROLE O B-PER ... l=1: O B-ORG B-ROLE B-FIRST B-NAME ... Unfortunately, the above layered sequences cannot include any entities of more than 5 tokens. Generally, a stack of L layers cannot predict entities containing more than L tokens! To address this issue, we propose a remedy solution: to predict all entities longer than L tokens on the topmost flat NER layer. Specifically, the bottom L −1 layers predict B-{class} tags for 1Label the first token of a mention as B-{class}; other tokens inside a mention as I-{class}; tokens outside any mention as O. 5921 PAD PAD PAD PAD PAD PAD PAD PAD inverse pyramid LSTM'dec LayerNorm' Dropout Conv1d' Dropout Concat decoding (layer 1) LSTMenc Embedding Layer (char, word) "The input sentence ..." 
Dropout Encoder decoding (layer 2) decoding (layer 3) decoding (layer 4) decoding (layer 5) inverse decoding (layer 1) inverse decoding (layer 2) inverse decoding (layer 3) inverse decoding (layer 4) Lineardec    (Logits Layer) (B, T, C) (B, T-1, C) (B, T-2, C) (B, T-3, C) (B, T-4, C) pyramid inverse pyramid Inverse Normal Bidirectional Language Model Concate Linearenc Conv1d' Concat zeros LSTMdec LayerNorm Dropout Conv1d Dropout decoding layer layer 1 layer 2 layer 3 layer 4 layer 5 pyramid t=0 t=1 t=2 t=3 t=4 t=5 t=6 t=0 t=1 t=2 t=3 t=4 t=5 t=6 inverse decoding layer The Network Detailed Structure Figure 2: Overview of a toy network with 5 decoding layers. The upper half shows the overall structure, while the lower half shows the details. B is the batch size; T represents the length of original text; C is the class number. complete entity mentions; and the topmost layer predicts both B-{class} and I-{class} tags. This stipulates that when two entities are nested, if one of them is longer than L, the other one cannot be longer than L −1. In the running example, suppose we had only 4 decoding layers (l = 1, . . . , 4), then the longest mention (Former U.N. Ambassador Jeane Kirkpatrick) would be recognized in the fourth decoding layer as following: l=4: B-PER I-PER ... l=3: B-PER O O ... l=2: O B-ROLE O B-PER ... l=1: O B-ORG B-ROLE B-FIRST B-NAME ... With the remedy solution, our model is able to handle entities longer than L. As most entity mentions are not too long (99% are no longer than 15 tokens), and it is even rarer for both two nested mentions to be longer than 15, we set the default number of flat decoder layers to L = 16 to minimize the impact of the remedy. Parameter L can be tuned for balance between accuracy and inference speed. 3.2 The Encoder We represent each word by concatenating character sequence embeddings and word embeddings. First, the character embeddings are dynamically generated by a LSTM (Lample et al., 2016) to capture the orthographic and morphological features of the word. It is suggested that with the introduction of character embeddings the model can better handle out-of-vocabulary (OOV) words. Second, the word embeddings are initialized with pre-trained word vectors. For OOV words, we randomly initialize an embedding for [UNK], which is tuned during training. The concatenated character and word embeddings are fed into a bidirectional LSTM encoding layer to further leverage contextual information. Formally, given the input sentence x: ˜xchar = LSTM char(Embedchar(x)) (1) ˜xword = Embedword(x) (2) ˜x = LSTM enc([˜xchar; ˜xword]) (3) For better performance, we adopt the popular pre-trained contextualized language model embeddings, such as BERT (Devlin et al., 2019). These embeddings are concatenated to the output of LSTM enc, followed by a linear layer to reduce the embedding dimension. i.e.: ˜x = Linearenc([˜x; LM(x)]) (4) 3.3 The Pyramid The pyramid recognizes entities in a bottom-up manner. It consists of L decoding layers, each of which corresponds to a flat named-entity recognizer. Each decoding layer has two main components, a LSTM and a CNN with a kernel of two. In layer l, the LSTM recognizes l-length entity mentions, and the CNN aggregates two adjacent hidden states and then feeds the text region embeddings enriched with layer information to the higher (l + 1-th) decoding layer. 
By passing through l decoding layers 5922 ACE-2004 ACE-2005 GENIA NNE train dev test train dev test train dev test train dev test sentences # total 6,198 742 809 7,285 968 1,058 15,022 1,669 1,855 43,457 1,989 3,762 # nested 2,718 (44%) 294 (40%) 388 (48%) 2,797 (38%) 352 (36%) 339 (32%) 3,222 (21%) 328 (20%) 448 (24%) 28,606 (66%) 1292 (65%) 2489 (66%) entities # total 22,195 2,514 3,034 24,700 3,218 3,029 47,006 4,461 5,596 248,136 10,463 21,196 # nested 10,157 (46%) 1,092 (43%) 1,417 (47%) 9,946 (40%) 1,191 (37%) 1,179 (39%) 8,382 (18%) 818 (18%) 1212 (22%) 20,6618 (83%) 8,487 (81%) 17,670 (83%) max length 57 35 43 49 31 27 20 20 15 16 15 15 Table 1: Statistics of the datasets used in the experiments. A sentence is considered nested if any two mentions in it are nested. An entity mention is considered nested if it contains any mention or is contained by any mention. with l −1 CNNs, each hidden state (at t) actually represents the region of l original tokens (from t to t + l −1). Therefore, the l-th decoding layer enumerates text spans of length l. And all these L layers together produce all possible entity spans. One may notice that the pyramid structure intrinsically provides useful inductive bias: The higher the layer, the shorter the input sequence, forcing the model to capture high-level information for predicting long entities and low-level information for predicting short entities. Moreover, as the length of each span representation is reduced to one on its target decoding layer, the prediction task on each layer is simple and clear - to predict entities whose representation length is one in this layer. Since the input of the first decoding layer is from the encoder while the others are from the output of their lower neighboring layers, the input bias and scale may differ among layers. This is detrimental to training. To address this issue, we apply layer normalization (Ba et al., 2016) before feeding the region embeddings into the decoding LSTM. Let ˜x1 = ˜x, for each decoding layer l: hl = LSTM dec(LayerNorm(˜xl)) (5) ˜xl+1 = Conv1d(hl) (6) 3.4 The Inverse Pyramid Each decoding layer in the bottom-up pyramid takes into account layer information from lower layers. However, a layer cannot get feedback from its higher neighbors, which could potentially help. Moreover, for long entities, their embeddings need to go through numerous lower layers and tend to lose important information. Therefore, we add an inverse pyramid, which recognizes entity mentions in a top-down manner, to address the above issues. While in the pyramid, sequences pass through a CNN to reduce sequence length before being fed into the higher decoding layer, in the inverse pyramid, however, we use another CNN with zero paddings and a kernel of two to reconstruct the lower-level text region embeddings. Specifically, to reconstruct the text region embeddings at the l −1-th decoding layer, we concatenate the hidden states of the l-th normal and inverse decoding layers, and feed it to the inverse CNN (see bottom-left pink box in Figure 2). There are two benefits for using the top-down inverse pyramid: (1) It gives the feedback from higher decoding layers, allowing bidirectional interaction between neighboring decoding layers; (2) Since the inverse pyramid needs to reconstruct lower-level sequence, it requires the pyramid to retain as much original information as possible, thereby mitigating the information loss for long entities. 
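To make the bottom-up decoding step concrete, the following is a minimal PyTorch-style sketch of a single decoding layer following Eqs. (5)-(6). The module name, the 200-dimensional hidden size, and the single-layer framing are illustrative assumptions rather than the authors' implementation; in the full model, the decoding LSTM and convolution weights are shared across all L layers.

```python
import torch
import torch.nn as nn

class DecodingLayer(nn.Module):
    """One bottom-up decoding layer: LayerNorm -> BiLSTM -> Conv1d (kernel 2).

    A minimal sketch of Eqs. (5)-(6); hyper-parameters are illustrative.
    """
    def __init__(self, hidden_dim=200):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_dim)
        # Bidirectional LSTM whose hidden states feed the flat tagger at this layer.
        self.lstm = nn.LSTM(hidden_dim, hidden_dim // 2,
                            batch_first=True, bidirectional=True)
        # Kernel of two: merges every pair of adjacent hidden states,
        # shortening the sequence by one token for the next layer.
        self.conv = nn.Conv1d(hidden_dim, hidden_dim, kernel_size=2)

    def forward(self, x):                # x: (B, T_l, hidden_dim)
        h, _ = self.lstm(self.norm(x))   # Eq. (5): (B, T_l, hidden_dim)
        x_next = self.conv(h.transpose(1, 2)).transpose(1, 2)  # Eq. (6): (B, T_l - 1, hidden_dim)
        return h, x_next

# Toy usage: the l-th application of the layer scores spans of length l.
layer = DecodingLayer()
x = torch.randn(2, 7, 200)     # batch of 2 sentences, 7 tokens each
h1, x2 = layer(x)              # h1 feeds the logits for length-1 spans
print(h1.shape, x2.shape)      # torch.Size([2, 7, 200]) torch.Size([2, 6, 200])
```

Stacking L such applications and attaching a logits layer to each h_l yields the layered tag sequences described in Section 3.1.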
Formally we have the following output from the inverse decoding layers: h′ l = LSTM ′dec(LayerNorm′(˜x′ l)) (7) ˜x′ l−1 = Conv1d′([hl; h′ l]) (8) For the top inverse decoding layer, we cannot compute h′ L, so we use zeros instead. Finally, with the concatenation of the hidden states of both the normal and inverse decoding layers, we use a feed-forward layer to predict their class: logitsl = Lineardec([hl; h′ l]). (9) 4 Experiment 4.1 Datasets We evaluate our model on four nested entity recognition corpora: ACE-2004 (Doddington et al., 2004), ACE-2005 (Walker et al., 2006), GENIA (Kim et al., 2003), and NNE (Ringland et al., 2019). For ACE-2004 and ACE-2005, we adopt the train/dev/test split of Lu and Roth 20152, as 2https://statnlp-research.github.io/ publications/ 5923 Setting Value batch size 32,32,64,32 optimizer SGD momentum 0.9 learning rate (lr) 0.01 dropout rate 0.3,0.4,0.4,0.2 hidden dim 200 # stacked layers 16 token emb dim 100,100,200,100 char emb dim 30,30,60,30 gradient clipping 5.0 Table 2: Hyperparameters used in our experiments. If 4 values are given, they correspond to ACE-2004, ACE2005, GENIA and NNE respectively. used in most previous studies. For GENIA, we use GENIAcorpus3.02p3, and follow the train/dev/test split of previous works (Finkel and Manning, 2009; Lu and Roth, 2015) i.e.: (1) split first 81%, subsequent 9%, and last 10% as train, dev and test set, respectively; (2) collapse all DNA, RNA, and protein subtypes into DNA, RNA, and protein, keeping cell line and cell type, and (3) removing other entity types, resulting in 5 entity types. For NNE, we keep the original dataset split and pre-processing. The statistics of each dataset are shown in Table 1. 4.2 Training Details We denote by Pyramid-Basic the model using the normal bottom-up pyramid only; and by Pyramid-Full the one with both the normal and inverse pyramids. We try to use as similar settings as possible on all datasets, and Table 2 describes the settings used in our experiments. For the word embeddings, we use 100-dimensional GloVe word embeddings trained on 6B tokens4 as initialization. We disable updating the word embeddings during training. Besides, characterbased embeddings are generated by a LSTM (Lample et al., 2016). We set the hidden dimension to 200 (100 for each direction in bidirectional LSTM). We use inverse time learning rate decay: ˆlr = lr/(1+decay rate∗steps/decay steps), with decay rate 0.05 and decay steps 1000. All results are averaged on 4 runs to ensure reproducibility. The GENIA corpus significantly differs from the others in its distribution, as it belongs to medical domain. So for GENIA, we initialize word embeddings with word vectors pre-trained on biomedical 3http://www.geniaproject.org/ genia-corpus/pos-annotation 4https://nlp.stanford.edu/projects/ glove/ corpus (Chiu et al., 2016)5, which are in 200 dimensions. We also evaluate our method with pre-trained language model embeddings: • [Flair] (Akbik et al., 2018): Pre-trained contextualized character-level embeddings. Here, we use the concatenation of news-forward and news-backward, forming embeddings of dimension 4096. For GENIA, we use pubmed-forward and pubmed-backward. • [BERT] (Devlin et al., 2019): Transformer based pre-trained contextual word embeddings. Here we use the bert-large-uncased checkpoint, with embeddings of dimension 1024. For each token, we generate the contextualized word embedding by averaging all BERT subword embeddings in the last four layers without fine-tuning. 
For GENIA, we use BioBERT v1.1 (Lee et al., 2020)6. • [ALBERT] (Lan et al., 2019): A lite BERT with shared transformer parameters. Here we use the albert-xxlarge-v2 checkpoint, with embeddings of dimension 4096. For each token, we average all ALBERT subword embeddings in the last four layers without finetuning. We generate Flair embeddings with the library provided by Akbik et al. 20197. We use the implementation by Wolf et al. 20198 to generate BERT and ALBERT embeddings. With pre-trained contextualized embeddings, the model is more prone to overfitting. So we increase the dropout rate by 0.05 for these settings. 4.3 Results of Comparison Table 3 presents the comparison of our model with existing methods. Our method outperforms all previous methods by a large margin. With conventional word embeddings, our method achieves 80.27, 79.42, 77.78, and 93.70 in terms of F1-score, 5https://github.com/cambridgeltl/ BioNLP-2016 6https://github.com/naver/ biobert-pretrained 7https://github.com/zalandoresearch/ flair 8https://github.com/huggingface/ transformers 5924 ACE-2004 ACE-2005 GENIA NNE Model P R F1 P R F1 P R F1 P R F1 Finkel and Manning 2009 75.4 65.9 70.3 Lu and Roth 2015 70.0 56.9 62.8 66.3 59.2 62.5 74.2 66.7 70.3 Muis and Lu 2017 72.7 58.0 64.5 69.1 58.1 63.1 75.4 66.8 70.8 Xu et al. 2017 68.2 54.3 60.5 67.4 55.1 60.6 Katiyar and Cardie 2018 73.6 71.8 72.7 70.6 70.4 70.5 79.8 68.2 73.6 Ju et al. 2018 74.2 70.3 72.2 78.5 71.3 74.7 Wang et al. 2018 74.9 71.8 73.3 74.5 71.5 73.0 78.0 70.2 73.9 77.4 70.1 73.6 Wang and Lu 2018 78.0 72.4 75.1 76.8 72.3 74.5 77.0 73.3 75.1 91.8 91.0 91.4 Sohrab and Miwa 2018 93.2 64.0 77.1 Fisher and Vlachos 2019 75.1 74.1 74.6 Lin et al. 2019 76.2 73.6 74.9 75.8 73.9 74.8 Strakov´a et al. 2019 77.1 75.4 76.4 Pyramid-Basic 80.83 78.86 79.83 79.27 79.37 79.32 77.91 77.20 77.55 93.37 93.91 93.64 Pyramid-Full 81.14 79.42 80.27 80.01 78.85 79.42 78.60 77.02 77.78 93.44 93.95 93.70 LM-based Xia et al. 2019 [ELMO] 81.7 77.4 79.5 79.0 77.3 78.2 Fisher and Vlachos 2019 [ELMO] 79.7 78.0 78.9 Fisher and Vlachos 2019 [BERT] 82.7 82.1 82.4 Shibuya and Hovy 2019 [BERT] 83.0 82.4 82.7 76.3 74.7 75.5 Luan et al. 2019 [ELMO] 84.7 82.9 76.2 Strakov´a et al. 2019 [BERT] 84.3 83.4 78.2 Strakov´a et al. 2019 [BERT+Flair] 84.4 84.3 78.3 Pyramid-Basic [BERT] 86.08 86.48 86.28 83.95 85.39 84.66 79.45 78.94 79.19 93.97 94.79 94.37 Pyramid-Basic [BERT+Flair] 87.01 86.55 86.78 84.90 86.08 85.49 79.98 78.51 79.24 93.97 94.98 94.47 Pyramid-Basic [ALBERT] 86.54 87.44 86.99 85.20 86.56 85.87 80.07 77.60 78.82 94.11 94.91 94.51 Pyramid-Basic [ALBERT+Flair] 86.63 87.15 86.89 85.10 87.22 86.15 78.48 79.39 78.93 94.18 94.79 94.48 Pyramid-Basic [ALBERT+BERT] 87.65 87.74 87.70 85.24 87.32 86.27 80.12 77.82 78.95 94.28 94.99 94.63 Pyramid-Full [BERT+Flair] 80.31 78.33 79.31 Pyramid-Full [ALBERT+BERT] 87.71 87.78 87.74 85.30 87.40 86.34 94.30 95.07 94.68 Table 3: Results of nested NER. Ju et al. 2018 used different dataset split. Strakov´a et al. 2019 introduces two methods, here we report the better one. Bold and underline indicate the best and the second best F1 respectively. even compatible with some LM-based baselines. A close one is from Strakov´a et al. 2019, which employs many extra features including input forms, lemmas and POS, whereas our method does not. Additionally, our method brings much higher recall values than the other methods. 
With pre-trained language model embeddings, specifically with ALBERT+BERT for ACE-2004, ACE-2005, NNE and with BERT+Flair for GENIA, our model achieves state-of-the-art F1 scores: 87.74, 86.34, 79.31, and 94.68 respectively. 4.4 Tuning Number of Layers We evaluate our method with different L on all datasets. Due to space limit, we only present the results of ACE-2005 in Table 4. The findings on the other datasets are similar. Results From All Layers We report in Table 4 the detailed results for all entity lengths while tuning L on ACE-2005. Obviously 1-word and 2-word entities account for the majority of entities (77%), where we achieve competitive results. Longer entities see reductions in performance. However, due to our remedy strategy, entities longer than L are still recognized with acceptable performance. Note R(N) is the recall of nested entities, i.e. for layer l, entities nested with other entities shorter than l are also counted in. Inference Speed Table 4 also shows the inference speed with different L for the basic and full models. Although the basic model does not perform as good as the full model, it is significantly faster. Since the time complexity of our method is O(TL) with T being the number of tokens and L the number of stacked layers, we can further speed up the inference by using smaller L value (e.g. L = 8 or 4), while achieving F1 scores higher than most baselines. 4.5 Ablation Study We conduct ablation study to verify the effectiveness of components of Pyramid. Likewise, we only present the results on ACE-2005 here. Character Embeddings: Using character is a standard technique for NER to dynamically capture orthographic and morphological features. It provides some improvements. Layer Normalization: LayerNorm eliminates the bias and scale difference of the inputs of each 5925 Pyramid-Basic L = 32 L = 16 L = 8 L = 4 len(e) # entities F1 R(N) F1 R(N) F1 R(N) F1 R(N) all 79.3 73.6 79.3 74.4 78.8 73.9 77.6 69.5 1 1706 (56%) 84.0 82.3 84.3 82.5 84.0 83.0 83.4 81.4 2 635 (21%) 79.3 77.5 79.7 78.6 78.8 77.7 78.6 76.2 3 248 (8%) 74.9 75.5 75.3 76.8 75.6 77.5 72.9 73.7 4 140 (5%) 72.1 73.1 71.8 75.0 72.0 73.3 65.7 61.1 5 90 (3%) 73.6 77.5 72.3 78.9 69.3 75.5 63.6 60.3 6-8 106 (3%) 57.9 59.3 56.2 59.3 53.4 56.7 47.7 45.9 9-16 81 (3%) 42.0 36.4 43.1 39.9 42.3 39.5 40.0 36.8 17- 25 (1%) 33.8 26.1 23.0 18.8 27.2 21.7 23.6 18.8 Inference Speed (Basic/Full, words per second) on GTX 1080 Ti batch size = 1 708 / 445 842 / 545 1116 / 781 1494 / 1153 batch size = 4 1526 / 955 2085 / 1361 2987 / 2151 4230 / 3280 batch size = 16 2949 / 2084 4372 / 3282 6660 / 5169 8999 / 7852 Table 4: Details of tuning L on ACE-2005. len(e) is the length of entities. R(N) is the recall of nested entities. Numbers below the horizontal lines indicate the results where the remedy solution starts working Pyramid-Basic P R F1 CharEmbs with 79.27 79.37 79.32 without 79.54 77.67 78.59 (-0.73) LayerNorm with 79.27 79.37 79.32 without 79.17 78.01 78.59 (-0.73) LSTMdec shared weights 79.27 79.37 79.32 independent 78.19 78.75 78.47 (-0.85) ReduceLength Conv1d 79.27 79.37 79.32 MeanPooling 79.18 77.77 78.47 (-0.85) MaxPooling 79.69 77.47 78.56 (-0.76) Table 5: Ablation study on ACE-2005 decoding layer and improve the F1 score. It also substantially accelerates the converging speed. Sharing LSTMdec: The jobs of decoding layers are similar: inheriting information from previous layers and recognizing entity representations of length one. 
Therefore, sharing weights maximizes the use of training data and prevents overfitting. Method of Reducing Length: We use CNN to reduce the sequence length at each decoding layer. As shown in Table 5, compared with average pooling and maximum pooling, CNN can effectively retain the original semantic information and capture the boundary information. Pyramid Layers: Apart from the results shown in Table 5, we emphasize that the performance gain of Pyramid owes a lot to the pyramid layers (both normal and inverse ones). As shown in Table 4, reducing L to 4 leads to a drop of F1 (-1.7). It is clear that when L = 1, our method degrades to a flat entity recognizer, which cannot handle nested mentions any more. Dataset Statistics train dev test Sentences # total 1599 400 600 # nested 1594 400 600 # overlap 230 54 75 Entities # total 16202 3978 5989 # nested 14506 3597 5390 # overlap 511 115 164 Pyramid-Basic P R F1 overall 87.5 86.9 87.2 nested 87.4 overlap 70.1 Table 6: Results of Pyramid-Basic with nested and overlapping entities. The dataset is based on part of NNE, with additional program-generated labels. 4.6 Overlapping Entity Recognition Overlapping mentions usually occur along with the attributive clause in natural language. For example, sentence “The burial site of Sheikh Abbad, who died 500 years ago, is located.” contains two overlapping mentions “The burial site of Sheikh Abbad” and “Sheikh Abbad, who died 500 years ago”. Due to lack of datasets for overlapping NER, we create a small dataset. For all sentences in NNE, we find 2599 which contain “, which” or “, who”. We use the ELMo-based constituency parser9 to find attributive clauses together with their modified noun phrases (“Sheikh Abbad, who ...”), and then see if a bigger noun phrase (“the burial site of Sheikh Abbad”) contains the noun phrase. Next, while keeping the original annotations, we add these two mentions to form a new dataset where around 14% sentences have overlapping but non-nested entity mentions. This dataset is split randomly into training, dev, and test sets containing 1599, 400, and 600 sentences respectively. Note the additional annotations are not verified by human, meaning they might contain some errors. However, it is still useful for testing the performance of our model for overlapping NER. The statistics of the data and the experimental results are shown in Table 6. It can be seen that our method can effectively handle overlapping NER. 5 Conclusion This paper presented Pyramid, a novel layered neural model for nested entity recognition. Our model relies on a layer-wise bidirectional decoding process (with both normal and inverse pyramids), 9Stern et al. 2017 with ELMo: https: //allennlp.s3.amazonaws.com/models/ elmo-constituency-parser-2018.03.14.tar. gz, implemented by Gardner et al. 2018. 5926 allowing each decoding layer to take into account the global information from lower and upper layers. Pyramid does not suffer from layer disorientation or error propagation, and is applicable for the more general overlapping NER. The proposed method obtained state-of-the-art results on four different nested NER datasets, confirming its effectiveness. Acknowledgments This work was supported by the Natural Science Foundation of China (No. 61672455), the Key Research and Development Program of Zhejiang Province of China (No. 2020C01024), the Natural Science Foundation of Zhejiang Province of China (No. 
LY18F020005), and the National Research Foundation, Prime Minister’s Office, Singapore under its Strategic Capability Research Centres Funding Initiative. References Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. 2019. Flair: An easy-to-use framework for state-of-the-art nlp. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54– 59. Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649. Beatrice Alex, Barry Haddow, and Claire Grover. 2007. Recognising nested named entities in biomedical text. In Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing, pages 65–72. Association for Computational Linguistics. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Kate Byrne. 2007. Nested named entity recognition in historical archive text. In International Conference on Semantic Computing (ICSC 2007), pages 589– 596. IEEE. Billy Chiu, Gamal Crichton, Anna Korhonen, and Sampo Pyysalo. 2016. How to train good word embeddings for biomedical nlp. In Proceedings of the 15th workshop on biomedical natural language processing, pages 166–174. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie M Strassel, and Ralph M Weischedel. 2004. The automatic content extraction (ace) program-tasks, data, and evaluation. In Lrec, volume 2, page 1. Lisbon. Jenny Rose Finkel and Christopher D Manning. 2009. Nested named entity recognition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 141–150. Joseph Fisher and Andreas Vlachos. 2019. Merge and label: A novel neural network architecture for nested ner. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5840–5850. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. Allennlp: A deep semantic natural language processing platform. In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1–6. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446–1459. Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861–871. J-D Kim, Tomoko Ohta, Yuka Tateisi, and Jun’ichi Tsujii. 2003. Genia corpus—a semantically annotated corpus for bio-textmining. 
Bioinformatics, 19(suppl 1):i180–i182. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 5927 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240. Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019. Sequence-to-nuggets: Nested entity mention detection via anchor-region networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5182–5192. Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857– 867. Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036–3046. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074. Aldrian Obaja Muis and Wei Lu. 2017. Labeling gaps between words: Recognizing overlapping mentions with mention separators. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2608–2618. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147–155. Nicky Ringland, Xiang Dai, Ben Hachey, Sarvnaz Karimi, Cecile Paris, and James R Curran. 2019. Nne: A dataset for nested named entity recognition in english newswire. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5176–5181. Takashi Shibuya and Eduard Hovy. 2019. Nested named entity recognition via second-best sequence learning and decoding. arXiv preprint arXiv:1909.02250. Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2843–2849. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 818–827. Jana Strakov´a, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested ner through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326–5331. Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia, 57. Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204–214. Bailin Wang, Wei Lu, Yu Wang, and Hongxia Jin. 2018. A neural transition-based model for nested mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1011–1017. Thomas Wolf, L Debut, V Sanh, J Chaumond, C Delangue, A Moi, P Cistac, T Rault, R Louf, M Funtowicz, et al. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Congying Xia, Chenwei Zhang, Tao Yang, Yaliang Li, Nan Du, Xian Wu, Wei Fan, Fenglong Ma, and S Yu Philip. 2019. Multi-grained named entity recognition. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1430–1440. Mingbin Xu, Hui Jiang, and Sedtawut Watcharawittayakul. 2017. A local detection approach for named entity recognition and mention detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1237–1247. Changmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Leung, and Guandong Xu. 2019. A boundary-aware neural model for nested named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 5928 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 357– 366.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5929–5939 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5929 ReInceptionE: Relation-Aware Inception Network with Joint Local-Global Structural Information for Knowledge Graph Embedding Zhiwen Xie1, Guangyou Zhou2∗, Jin Liu1∗, Jimmy Xiangji Huang3 1School of Computer Science, Wuhan University 2School of Computer Science, Central China Normal University 3School of Information Technology, York University [email protected], [email protected] [email protected], [email protected] Abstract The goal of Knowledge graph embedding (KGE) is to learn how to represent the lowdimensional vectors for entities and relations based on the observed triples. The conventional shallow models are limited to their expressiveness. ConvE (Dettmers et al., 2018) takes advantage of CNN and improves the expressive power with parameter efficient operators by increasing the interactions between head and relation embeddings. However, there is no structural information in the embedding space of ConvE, and the performance is still limited by the number of interactions. The recent KBGAT (Nathani et al., 2019) provides another way to learn embeddings by adaptively utilizing structural information. In this paper, we take the benefits of ConvE and KBGAT together and propose a Relation-aware Inception network with joint local-global structural information for knowledge graph Embedding (ReInceptionE). Specifically, we first explore the Inception network to learn query embedding, which aims to further increase the interactions between head and relation embeddings. Then, we propose to use a relation-aware attention mechanism to enrich the query embedding with the local neighborhood and global entity information. Experimental results on both WN18RR and FB15k-237 datasets demonstrate that ReInceptionE achieves competitive performance compared with state-of-the-art methods. 1 Introduction Knowledge graphs (KGs) are at the core of most state-of-the-art natural language processing solutions and have been spotlighted in many real-world applications, including question answering (Hao et al., 2017), dialogue generation (He et al., 2017; Madotto et al., 2018) and machine reading comprehension (Yang and Mitchell, 2017). Typically, KGs ∗Corresponding author. are directed graphs whose nodes denote the entities and edges represent the different relations between entities. The structured knowledge in KGs is organized in the form of triples (h, r, t), where h and t stand for the head and tail entities respectively, and r represents the relation from h to t. Although large-scale KGs (e.g., Freebase (Bollacker et al., 2008), DBpedia (Lehmann et al., 2015)) have already contained millions or even billions of triples, they are still far from complete since the emerging new knowledge appears. Knowledge graph embedding (KGE) is an effective solution to solve the incompletion problem. KGE aims to learn the low-dimensional vectors (embeddings) for entities and relations based on the observed triples in KGs. Conventional models including TransE (Bordes et al., 2013) and its numerous extensions (e.g., TransD (Ji et al., 2015), TransR (Lin et al., 2015), DistMul (Yang et al., 2015), ComplEx (Trouillon et al., 2016), etc.) have been proposed. These shallow models are limited to their expressiveness (Dettmers et al., 2018). Recently, CNN-based methods have been proposed to capture the expressive features with parameter efficient operators. 
ConvE (Dettmers et al., 2018) takes advantage of CNN and uses convolution filters on 2D reshapings of the head entity and relation embeddings. Through this, ConvE can increase the interactions between head and relation embeddings. Empirical results have proved that increasing the number of interactions is beneficial to the KGE task, but ConvE is still limited by the number of interactions (Jiang et al., 2019; Vashishth et al., 2020). Furthermore, ConvE does not consider the structural information. In contrast, graph-based methods are effective to aggregate neighborhood information to enrich the entity/relation representation (Schlichtkrull et al., 2018; Bansal et al., 2019; Nathani et al., 2019). Among them, KB5930 Figure 1: An example of relation-aware local and global information (left) and the general framework of our proposed ReInceptionE (right). GAT (Nathani et al., 2019) achieves state-of-the-art performance on various benchmark datasets via using graph attention networks (GAT) (Velickovic et al., 2018). KBGAT learns embeddings for every entity by taking all possible relations into account, which requires multiple hops of reasoning. In contrast, it can be beneficial to learn embeddings from a query-relevant subgraph of the local neighborhood and global entities. As an example shown in Figure 1, given a query (Jack London, nationality, ?) for Jack London, we can gather the relationaware local neighbor (place lived, Okaland). The local neighbor allows us to project Jack London into the Okaland region of the embedding space, which can lead to a high score for predicting the target America, as Okaland and America are close in embedding space. Besides, we also note that a specific relation can be acted as the “bridge” to link the related entities. Considering the relation nationality, the related head entities { Kaneto Shiozawa, Shammi Kapoor, Will Smith, · · · } and tail entities { America, Canada, Japan, · · · } tend to be a set of person names and countries. These related entities act as a strong signal to judge whether a triple is valid or not. Based on the above observations, we take the benefits of ConvE and KBGAT together and propose a Relation-aware Inception network with joint local-global structural information for knowledge graph Embedding, and we name it ReInceptionE. In ReInceptionE, we first adapt Inception network (Szegedy et al., 2015, 2016) – a high performing convolutional neural network with carefully designed filters, to increase the interactions using multiple convolution filters with different scales, while at the same time to keep parameter efficient. Then, we construct a local neighborhood graph and a global entity graph by sharing the head and relation respectively for a given query. With the constructed graphs, we apply a relation-aware attention mechanism to aggregate the local neighborhood features and gather the global entity information to enrich the head/relation representation. Finally, we aggregate the joint local-global structural information using a fully connected layer to predict the missing links. In summary, we make the following three contributions: (1) It is the first to explore Inception network to learn query embedding which aims to further increase the interactions between head and relation embeddings; (2) We propose to use a relation-aware attention mechanism to enrich the query embedding with the local neighborhood and global entity information; (3) We conduct a series of experiments to evaluate the performance of the proposed method. 
Experimental results demonstrate that our method obtains competitive performance in comparison to these state-of-the-art models on both WN18RR and FB15k-237. The rest of this paper is structured as follows. 5931 Section 2 describes our proposed method for KGE. In Section 3, the experimental results are presented. We make a conclusion in Section 4. 2 Our Approach In this section, we first describe the background and definition in Subsection 2.1, and Inception-based query encoder in Subsection 2.2. Then, we introduce the relation-aware local attention and global attention in Subsection 2.3 and 2.4, respectively. Finally, we describe the joint using of them in Subsection 2.5. 2.1 Background and Definition Definition 3.1 Knowledge Graph G: A knowledge graph G = {(h, r, t)|(h, r, t) ∈E×R×E} denotes a collection of triples, where E and R indicate entities and relations, respectively, h, t ∈E represent the head entity and tail entity, and r ∈R denotes the specific relation linking from the head entity h to tail entity t. Definition 3.2 Knowledge Graph Embedding: Knowledge graph embedding aims to learn embeddings of entities and relations with the valid triples in G, and then predict the missing head entity h given query (?, r, t) or tail entity t given query (h, r, ?) with the learned entity and relation embeddings. The framework of the proposed ReInceptionE is shown in Figure 1 (right). ReIncetionE consists of four modules: (1) Inception-based query encoder (InceptionE), which is used to transform the input query q = (h, r, ?) into a k-dimensional vector vq; (2) relation-aware local attention and (3) relation-aware global attention are used to capture the local neighborhood information and the global entity information; and (4) joint relation-aware attention is used to aggregate the different structural information using a fully connected layer. Finally, we compute the score for the given triple (h, r, t) based on the query embedding and the tail entity embedding. 2.2 Inception-Based Query Encoder ConvE (Dettmers et al., 2018) is the first model to apply CNN for KGE, which uses 2D convolution operation to model the head and relation in a query. However, ConvE is limited by the number of interactions between the head and relation embeddings (Jiang et al., 2019; Vashishth et al., 2020). In this paper, we propose to employ the Inception network Figure 2: The structures of ConvE (left) and the proposed Inception-based query encoder (right). The red squares denote the slide windows of convolution filters. (Szegedy et al., 2015, 2016), a high performing convolutional neural network with carefully designed filters, to increase the interactions by taking the head and relation as two channels of the input. Figure 2 shows the differences between InceptionE (right) and ConvE (left). Obviously, ConvE cannot capture full interactions between the head and relation embeddings since the convolution operations in ConvE only slides on the entity or relation 2D matrices independently. On the contrary, InceptionE can increase the interactions between the head and relation embeddings using multiple convolution filters with different scales, while at the same time keep parameter efficient. As shown in Figure 2, given a query q = (h, r, ?), we first reshape the head and relation embeddings as 2D matrices denoted as vh and vr. Then, the 2D embeddings are viewed as two channels of the input for the Inception network. 
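As a minimal illustration of this stacking step (assuming, purely for concreteness, 100-dimensional embeddings reshaped into 10x10 maps; the exact reshape size is not fixed by the text), the head and relation vectors become the two input channels as follows:

```python
import torch

# Illustrative reshaping of a query (h, r, ?): two 100-d embeddings become
# 10x10 maps stacked as the two input channels of the Inception encoder.
d, H, W = 100, 10, 10                 # assumed embedding size with d = H * W
v_h = torch.randn(d).view(H, W)       # 2D reshaping of the head embedding
v_r = torch.randn(d).view(H, W)       # 2D reshaping of the relation embedding
query_input = torch.stack([v_h, v_r], dim=0)   # shape (2, 10, 10): channels x H x W
print(query_input.shape)
```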
Thus, the entries at the same dimension of vh and vr are aligned over the channel dimension, which enables the convolution operations to increase the interactions between the head and relation embeddings. Specifically, We first use 1 × 1 convolutions to capture the direct interactions at the same dimension, which can be formulated as: v1×1 = Relu([vh||vr] ∗ω1×1) (1) where Relu (Glorot et al., 2011) is a non-linear activation function, || denotes the concatenation operation, ∗denotes the convolutional operation and 5932 ω1×1 is the parameter of convolution filters with 1 × 1 size, v1×1 denotes the interaction features of the first 1 × 1 convolutional layer. Then, filters with different sizes, such as 2 × 2 and 3 × 3, are applied to capture high-level interaction features in various scales. Thus, we can get interaction features of the 2 × 2 and 3 × 3 convolutional layers, denoted by v2×2 and v3×3, respectively. As suggested in (Szegedy et al., 2016), we use two 3 × 3 convolutions instead of a 5 × 5 convolution to capture interaction features in larger spatial filters, which is able to reduce the number of parameters. The two 3 × 3 convolutions are denoted as: v2(3×3) = Relu(Relu(v2(3×3) 1×1 ∗ω1 3×3)∗ω2 3×3) (2) where v2(3×3) 1×1 is the input interaction features, ω1 3×3 and ω2 3×3 are parameters of the two 3 × 3 convolution layers. Finally, the output interaction features with different scales and levels are concatenated and a fully connected layer is applied to obtain the embedding of the given query. Formally, we define the Inception-based query encoder model as: vq = Inception(vh, vr) = Relu(vec([v1×1||v2×2||v3×3||v2(3×3)])W) (3) where W is the parameter of the fully connected layer. 2.3 Relation-Aware Local Attention KBGAT learns embedding for every entity by taking all possible relations into account, and the embedding learning is impaired by the irrelevant neighbors. In contrast, it can be beneficial to learn embedding from a query-relevant neighborhood graph. In this subsection, we first construct a relation-aware neighborhood graph and then apply an attention mechanism to aggregate local graph structure information. For the query q = (h, r, ?), we denote its neighbors as Nq = {ni = (ei, ri)|(ei, ri, h) ∈G}. Note that, for each triple (h, r, t), we create an inverse triple (t, r−1, h), which has also been used in (Lacroix et al., 2018; Dettmers et al., 2018). Thus, query (?, r, t) can be converted to (t, r−1, ?). And the neighbors {(rj, ej)|(h, rj, ej) ∈G} for head entity h can be converted to a format of {(ej, r−1 j )|(h, rj, ej) ∈G}. Thus, Nq contains both the outgoing and incoming neighbors for a query q = (h, r, ?). Each neighbor ni = (ei, ri) ∈Nq is also a query with a head entity ei and a relation ri. Thus, each entity and relation in neighbor ni = (ei, ri) can be encoded using the Inception-based query encoder: vni = Inception(vei, vri) (4) where vei and vri are the 2D embedding vectors of entity ei and relation ri. In practice, different neighbors may have different impacts for a given query. It is useful to determine the importance of each neighbor for a specific query. As an example in Figure 1, for the query (Jack London, nationality, ?), it is reasonable to focus on the the neighbors related to the relation nationality, such as (Jack London, place lived, Oakland). 
To this end, we use relation-aware attention mechanism to assign different importance for each neighbor and compute the relevant score for each neighbor using a non-linear activation layer: si = LeakyRelu(W1[W2vq||W3vni]) (5) where W1, W2 and W3 are parameters to be trained and LeakyRelu (Maas et al., 2013) is the activation function. We then normalize the relevant scores for different neighbors using a softmax function to make it comparable across the neighbors, which is denoted as: αi = exp(si) P nj∈Nq exp(sj) (6) Finally, we aggregate the neighborhood information according to their attention scores and apply a non-linear function to obtain the neighborhood vector. To keep more information of the original query embedding, we also apply a residual operation: vn = Relu  X ni∈Nq αiW3vni  + W2vq (7) For simplification, we denote the above relationaware attention operations as: vn = ReAtt(Vn, vq) (8) where Vn = {vni|ni ∈Nq} is a set of local neighobrhood vectors. 5933 2.4 Relation-Aware Global Attention The number of relation-aware local neighbors for each node (entity) varies from one to another, making the neighbor graph very sparse. The sparse nature would affect the accuracy of the embedding. In fact, a specific relation can be acted as the “bridge” to link the related entities. In this subsection, we construct a relation-aware head graph and tail graph by gathering all entities for relation r in the given query q = (h, r, ?). Intuitively, all head entities for relation r share some common type information. And the tail entities for relation r contain some implicit information about the type of the target entity t. For example in Figure 1, given the relation nationality, all heads { Kaneto Shiozawa, Shammi Kapoor, Will Smith, · · · , } and tails { America, Canada, Japan, · · · , } are the names of a person and a country, sharing the similar entity types. These relation-aware global heads and tails can provide some useful information for the KGE task. Thus, we construct relation-aware global head and tail graphs according to the head and tail entities of the relation. Let Hr = {ei|(ei, r, ej) ∈G} and Tr = {ej|(ei, r, ej) ∈G} denote a set of head and tail entities for relation r, respectively. For each head entity hri ∈Hr, we first represent it as an embedding vector vhri. Then, we use relation-aware attention mechanism to capture the relevant information from all the relation-aware head entities, which is denoted as: vrh = ReAtt(Vrh, vq) (9) where Vrh = {vhri|hri ∈Hr} is a set of entity vectors for relation-aware global entities. Similarly, we use relation-aware attention mechanism to capture global tail informations, which is computed as: vrt = ReAtt(Vrt, vq) (10) where Vrt = {vtri|tri ∈Tr} is a set of entity embeddings for relation-aware global tails. 2.5 Joint Relation-Aware Attention Once obtained the relation-aware local neighborhood information vn and global head and tail vectors vht and vrt, we concatenate these vectors and merge them by using a linear feed-forward layer: v′ q = W4[vn||vrh||vrt] + b (11) Dataset WN18RR FB15k-237 #Entities 40,943 14,541 #Relations 11 237 #Training 86,835 141,442 #Validation 3,034 17,535 #Test 3,134 20,466 Table 1: Statistics of the datasets. where W4 and b are the parameters of the feedforward layer. 
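As an illustration of the ReAtt operation used in Eqs. (5)-(10), the following is a minimal PyTorch-style sketch. The dimensionality, parameter names, and single-query framing are assumptions for readability rather than the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReAtt(nn.Module):
    """Relation-aware attention over a set of candidate vectors (Eqs. 5-8).

    A minimal sketch; dimensions and parameter names are illustrative.
    """
    def __init__(self, dim=100):
        super().__init__()
        self.w1 = nn.Linear(2 * dim, 1, bias=False)   # scoring vector W1
        self.w2 = nn.Linear(dim, dim, bias=False)     # projects the query, W2
        self.w3 = nn.Linear(dim, dim, bias=False)     # projects the candidates, W3

    def forward(self, v_set, v_q):
        # v_set: (N, dim) candidate vectors (local neighbors or global heads/tails)
        # v_q:   (dim,)   query embedding from the Inception-based encoder
        q = self.w2(v_q).expand(v_set.size(0), -1)            # (N, dim)
        c = self.w3(v_set)                                    # (N, dim)
        s = F.leaky_relu(self.w1(torch.cat([q, c], dim=-1)))  # Eq. (5): (N, 1)
        alpha = torch.softmax(s, dim=0)                       # Eq. (6): (N, 1)
        # Weighted aggregation plus a residual connection to the query, Eq. (7)
        return torch.relu((alpha * c).sum(dim=0)) + self.w2(v_q)

# Toy usage: aggregate 5 relation-aware neighbor vectors for one query.
att = ReAtt(dim=100)
v_n = att(torch.randn(5, 100), torch.randn(100))
print(v_n.shape)   # torch.Size([100])
```

The same operation is applied three times, to the neighbor vectors Vn, the global head vectors Vrh, and the global tail vectors Vrt, and the three outputs are then merged by the feed-forward layer in Eq. (11).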
Finally, we compute the score for each triple (h, r, t) by applying a dot product of the query embedding v′ q and the tail embedding vt: f(h, r, t) = v′T q vt (12) To optimize the parameters in our model, we compute the probability of the tail t using a softmax function: p(t|h, r) = exp(λf(h, r, t)) P (h,r,t′)∈G′∪{(h,r,t)} exp(λf(h, r, t′)) (13) where λ is a smoothing parameter, and G′ is a set of invalid triples created by randomly replacing the tail t with an invalid entity t′. We train the model by minimizing the following loss function: L = −1 |E| |E| X i=0 log p(ti|hi, ri) (14) where (hi, ri, ti) ∈G is a valid triple, and |E| is the number of valid triples in G. 3 Experiments 3.1 Experimental Setup Datasets: We conduct experiments for KGE on two widely used public benchmark datasets : WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova et al., 2015). WN18RR is a subset of WN18 (Bordes et al., 2013) while FB15k-237 is a subset of FB15k (Bordes et al., 2013). Since WN18 and FB15k contain a large number of inverse relations, making the triples in the test set can be obtained simply by inverting triples in the training set. To address the above problem, both WN18RR (Dettmers et al., 2018) and FB15k-237 (Toutanova et al., 2015) are generated by removing the inverse relations from WN18 and FB15k. In recent two 5934 Models WN18RR FB15k-237 MR MRR Hits@10 MR MRR Hits@10 TransE (Bordes et al., 2013)* 2300 0.243 0.532 323 0.279 0.441 DistMult (Yang et al., 2015)* 5110 0.430 0.490 512 0.281 0.446 ComplEx (Trouillon et al., 2016)* 5261 0.440 0.510 546 0.278 0.450 R-GCN+ (Schlichtkrull et al., 2018) 0.249 0.417 CACL (Oh et al., 2018) 3154 0.472 0.543 235 0.349 0.487 ConvE (Dettmers et al., 2018) 4187 0.430 0.520 244 0.325 0.501 NKGE (Wang et al., 2018) 4170 0.450 0.526 237 0.330 0.510 TransMS (Yang et al., 2019) 6523 0.460 249 0.445 AnyBURL (Meilicke et al., 2019) 0.470 0.552 0.310 0.486 SACN (Shang et al., 2019) 0.470 0.540 0.350 0.540 A2N (Bansal et al., 2019) 0.450 0.510 0.317 0.486 GRank (Ebisu and Ichise, 2019) 0.470 0.539 0.322 0.489 ConvR (Jiang et al., 2019) 0.475 0.537 0.350 0.528 MuRE (Balazevic et al., 2019b) 0.475 0.554 0.336 0.521 RotatE (Sun et al., 2019) 3340 0.476 0.571 177 0.338 0.533 QuatE (Zhang et al., 2019) 3472 0.481 0.564 176 0.311 0.495 InteractE (Vashishth et al., 2020) 5202 0.463 0.528 172 0.354 0.535 ConvKB (Nguyen et al., 2018)b 3433 0.249 0.524 309 0.243 0.421 CapsE (Nguyen et al., 2019)b 718 0.415 0.559 403 0.150 0.356 KBGAT (Nathani et al., 2019)b 1921 0.412 0.554 270 0.157 0.331 ReInceptionE (ours) 1894 0.483 0.582 173 0.349 0.528 ConvKB (Nguyen et al., 2018)a 2554 0.248 0.525 257 0.396 0.517 CapsE (Nguyen et al., 2019)a 719 0.415 0.560 303 0.523 0.593 KBGAT (Nathani et al., 2019)a 1940 0.440 0.581 210 0.518 0.626 Table 2: Link prediction results on WN18RR and FB15k-237 test sets. * denotes that the results are taken from (Dettmers et al., 2018), the superscript a represents the results reported in the original papers while b represents the results are taken from (Sun et al., 2020), other results are directly taken from the corresponding papers. Both MRR and Hits@1 have a strong correlation, thus we do not report the results of Hits@1 since it does not give any new insight (Nguyen et al., 2019). The best results are in bold and the second best results are in underline. years, WN18RR and FB15k-237 have become the most popular datasets for the KGE task. Table 1 shows the summary statistics of the datasets. 
Implementations: For a test triple (h, r, t), the purpose of KGE task is to predict missing links, e.g. predict tail entity t given head entity h and relation r or predict head entity h given tail entity t and relation r. To evaluate our method, three metrics are used, including Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hit@10 (e.g. the accuracy in top 10 predictions). Please note that lower MR, higher MRR and Hits@10 indicate better performance. We follow the “Filtered” setting protocol (Bordes et al., 2013) to evaluate our model, i.e., ranking all the entities excluding the set of other true entities that appeared in training, validation and test sets. We initialize the embedding of entity and relation in our ReInceptionE model using the pre-trained embeddings with 100-dimension used in (Nguyen et al., 2019). We use Adam (Kingma and Ba, 2015) to optimize the model. The parameters of our model are selected via grid search according to the MRR on the validation set. We select the dropout rate from {0.1, 0.2, 0.4, 0.5}, the learning rate from {0.001, 0.0005, 0.0002, 0.0001} , the L2 norm of parameters from {1e−3, 1e−5, 1e−8}, the batch size from {32, 64, 128, 256, 512} and the smoothing parameter λ in Equation 13 from {1, 5, 10}. Finally, the learning rate is set to 0.0002 for WN18RR and 0.0001 for FB15k-237. The L2 norm of parameters is set to 1e−5. The batch size is set to 256. The dropout rate is set to 0.4 for WN18RR and 0.2 for FB15k-237. The smoothing parameter in Equation 13 is set to λ = 5. The number of filters for each convolution operation in the Inception module is set to 32. We 5935 Models WN18RR FB15k-237 MR MRR Hits@10 MR MRR Hits@10 ConvE 4187 0.430 0.520 244 0.325 0.501 KBGAT 1921 0.412 0.554 270 0.157 0.331 InceptionE 2317 0.451 0.563 215 0.334 0.518 ReInceptionE w/o N 1942 0.449 0.573 185 0.348 0.525 ReInceptionE w/o E 1809 0.412 0.569 186 0.343 0.522 ReInceptionE 1894 0.483 0.582 173 0.349 0.528 Table 3: Impact of different modules contributes the KGE task. observe that MRR performance increases slowly, starting to stagnate around 200 epochs. Finally, we train the model up to 200 epoches in the following experiments. The source codes are available at https://github.com/JuneTse/ReInceptionE. 3.2 Main Results We compare our results with various state-of-theart methods. Experimental results are summarized in Table 2. For all KGE models, a key step is to create the invalid triples to construct the negative samples. Most recently, Sun et al. (2020) investigated the inappropriate evaluation problem happened in ConvKB (Nguyen et al., 2018), CapsE (Nguyen et al., 2019) and KBGAT (Nathani et al., 2019). In fact, this issue comes from the unusual score distribution, e.g., the score function for some invalid triples gets the same values as the valid triples. Sun et al. (2020) also found that KBGAT removed the invalid triples when they appeared in the test set during negative sampling, suffering from the leakage of test triples. Therefore, we take the results (marked with the superscript b) from (Sun et al., 2020) for ConvKB, CapsE and KBGAT. Besides, we also list the results reported in the original papers (marked with the superscript a). From Table 2, we can see that our proposed ReInceptionE obtains competitive results compared with the state-of-the-art methods. On WN18RR dataset, the ReInceptionE achieves the best results using Hits@10 and MRR, and the second-best results using MR. 
On FB15k-237 dataset, the ReInceptionE obtains the second-best results using MR, and comparable results using MRR and Hits@10. Our proposed ReInceptionE is closely related to ConvE (Dettmers et al., 2018) and KBGAT (Nathani et al., 2019). Compared with ConvE, ReInceptionE achieves large performance gains on both WN18RR and FB15k-237 (ConvE vs. ReInceptionE). The reason is that instead of simply concatenating the head and relation embeddings, ReInceptionE takes head and relation as two channels of the input and applies the Inception network to capture the rich interactions, which is able to learn expressive features by using filters with various scales. Unlike KBGAT, the ReInceptionE takes the (entity, relation) pair as a query and utilizes the relation-aware attention mechanism to gather the most relevant local neighbors and global entity information for the given query. The results again verify the effectiveness of the relation-aware local and global information for KGE. Some other methods have been proposed to address the KGE task, such as pLogicNet (Ou and Tang, 2019), RPJE (Niu et al., 2020), CoKE (Wang et al., 2019), TuckER (Balazevic et al., 2019a), D4GUmbel (Xu and Li, 2019) and HAKE (Zhang et al., 2020). pLogicNet (Ou and Tang, 2019) and RPJE (Niu et al., 2020) leverage logic rules to improve the performance. CoKE (Wang et al., 2019) uses Transformer (Vaswani et al., 2017) to encode contextualized representations. HAKE (Zhang et al., 2020) embeds entities in the polar coordinate system to learn semantic hierarchies. D4-Gumbel (Xu and Li, 2019) uses the dihedral group to model relation composition. TuckER (Balazevic et al., 2019a) uses Tucker decomposition to learn tensor factorization for KGE. These methods take a series of different ways to model the KGE task. For example, logic rules play an important role to determine whether a triple is valid or not, we suspect that the performance of our proposed ReInceptionE can be further improved when taking the logic rules into account. We will leave the comparison and deep analysis in the future work. 3.3 Impact of Different Modules We describe the experimental results in Table 3 to investigate the impact of different modules in ReInceptionE. In Table 3, “InceptionE” is the baseline 5936 Models Predicting head Predicting tail 1-1 1-N N-1 N-N 1-1 1-N N-1 N-N WN18RR ConvE 0.975 0.414 0.110 0.950 0.975 0.153 0.303 0.949 InceptionE 0.976 0.587 0.128 0.957 0.952 0.231 0.482 0.957 ReInceptionE 0.976 0.586 0.152 0.961 0.976 0.272 0.494 0.958 FB15k-237 ConvE 0.303 0.590 0.137 0.400 0.272 0.088 0.845 0.545 InceptionE 0.573 0.624 0.175 0.452 0.557 0.124 0.865 0.557 ReInceptionE 0.609 0.651 0.185 0.473 0.594 0.149 0.872 0.603 Table 4: Link prediction results for each relation category on the WN18RR and FB15k-237 test sets using Hits@10. Following (Bordes et al., 2013), we classify relations into four groups: one-to-one (1-1), one-to-many (1-N), manyto-one (N-1) and many-to-many (N-N). Query and Target Top Neighbors and Predictions Query: (Jack London, nationality, ?) Target: America Top Neighbors: (place lived, Oakland) Prob: 0.415 (place of birth, San Francisco) Prob: 0.353 (Berkeley, student) Prob: 0.083 (influence by, Friedrich Nietzsche) Prob: 0.042 (influence by, Charles Darwin) Prob: 0.031 Top Predictions: America, United Kingdom, Canada, Australia, Germany Query: (Jerry Lewis, languages, ?) 
Target: English Language Top Neighbors: (place of birth, Newark) Prob: 0.197 (place lived, Newark) Prob: 0.173 (Nutty Professor II, story by) Prob: 0.105 (award nomination, Razzie Award for Worst Actor) Prob: 0.089 (nominated for, Law & Order: Special Victims Unit) Prob: 0.082 Top Predictions: English Language, Spanish Language, French Language, Italian Language, Japanese Language Table 5: Two examples of top 5 attention neighbors and predictions for the given queries. model without using relation-aware local neighbors and global entities. “ReInception w/o N” is the model without using relation-aware local neighbor information while “ReInception w/o E” is the model without using relation-aware global entity information. Besides, we also take two closely related models ConvE and KBGAT for fair comparison. From Table 3, we can see that our baseline InceptionE outperforms the closely related CNN-based model ConvE. Compared with ConvE, InceptionE is more powerful because it can capture the rich interaction features by using filters with various scales. And the ReInceptionE, which incorporates relation-aware local neighborhood and global entity information, outperforms the related graph-based model KBGAT. Table 3 also shows that the ReInceptionE outperforms InceptionE, “ReInception w/o N” and “ReInception w/o E” by a large margin on both datasets, which reconfirms our observations that relation-aware local neighbors and global entities can play different contributions for KGE. 3.4 Evaluation on different Relation Types In this subsection, we present the experimental results on different relation types on WN18RR and FB15k-237 using Hits@10. We choose the closely related model ConvE, as well as InceptionE as the baselines. Following (Bordes et al., 2013), we classify the relations into four groups: one-to-one (1-1), one-to-many (1-N), many-to-one (N-1) and manyto-many (N-N), based on the average number of tails per head and the average number of heads per tail. Table 4 shows the link prediction results for each relation category. From Table 4, we find that InceptionE achieves better performance than ConvE for all relation types, indicating that increasing the number of interactions between head and re5937 lation embeddings is indeed beneficial to KGE task. Furthermore, our proposed ReInceptionE significantly outperforms ConvE and InceptionE for all relation types. In particular, ReInceptionE obtains larger improvements for complex relations, such as one-to-many, many-to-one and many-to-many. This again verifies our observations that increasing the interactions and taking the local-global structural information allows the model to capture more complex relations. 3.5 Case Study In order to further analyze how relation-aware neighbors contribute to KGE task, we give two examples in Table 5. For the query (Jack London, nationality, ?), ReInceptionE assigns the highest attention scores for neighbors (place lived, Oakland), since Oakland and America are close to each other in embedding space because of other relations between them. And the top predictions for the query are a set of entities with the type of country. For the second example (Jerry Lewls, languages, ?), ReInceptionE assigns the very high score for neighbor (place of birth, Newark). This can allow us to project (place of birth, Newark) into the Jerry Lewis region of the embedding space, which can lead to a high score for predicting the target English Language. 
These examples give clear evidence of how our proposed ReInceptionE benefits the KGE task. 4 Conclusions In this paper, we propose a novel relation-aware Inception network for knowledge graph embedding, called ReInceptionE. ReInceptionE takes the benefits of ConvE and KBGAT together. The proposed method first employs Inception network to learn the query embedding, with the aim of increasing the interaction between head and relation embeddings, while at the same time to keep the parameter efficient. Then, we gather the relation-aware local neighborhood and global entity information with an attention mechanism and enrich the query embedding with the joint local-global structural information. Empirical studies demonstrate that our proposed method obtains comparative performance compared with the state-of-the-art performance on two widely used benchmark datasets WN18RR and FB15k-237. Acknowledgments This work was supported by the National Natural Science Foundation of China under Grants 61972290 and 61972173, and also supported by the National Key R&D Program of China under Grant 2018YFC1604000. This research was also supported in part by the research grant from Natural Sciences and Engineering Research Council (NSERC) of Canada and York Research Chairs (YRC) program. We thank anonymous reviewers for their thorough review comments on this paper. References Ivana Balazevic, Carl Allen, and Timothy Hospedales. 2019a. TuckER: Tensor factorization for knowledge graph completion. In Proceedings of the EMNLPIJCNLP. Ivana Balazevic, Carl Allen, and Timothy M. Hospedales. 2019b. Multi-relational poincar´e graph embeddings. In Proceedings of the NeurIPS. Trapit Bansal, Da-Cheng Juan, Sujith Ravi, and Andrew McCallum. 2019. A2N: Attending to neighbors for knowledge graph inference. In Proceedings of the ACL. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the SIGMOD. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Proceedings of the NIPS. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of the AAAI. Takuma Ebisu and Ryutaro Ichise. 2019. Graph pattern entity ranking model for knowledge graph completion. In Proceedings of the NAACL. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In Proceedings of the AISTATS. Yanchao Hao, Yuanzhe Zhang, Kang Liu, Shizhu He, Zhanyi Liu, Hua Wu, and Jun Zhao. 2017. An endto-end model for question answering over knowledge base with cross-attention combining global knowledge. In Proceedings of the ACL. He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. 2017. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. In Proceedings of the ACL. 5938 Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the ACL. Xiaotian Jiang, Quan Wang, and Bin Wang. 2019. Adative convolution for multi-relational learning. In Proceedings of the NAACL-HLT. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the ICLR. Timoth´ee Lacroix, Nicolas Usunier, and Guillaume Obozinski. 2018. Canonical tensor decomposition for knowledge base completion. 
In Proceedings of the ICML. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, S¨oren Auer, and Christian Bizer. 2015. Dbpedia – a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the AAAI. Maas, Andrew L, Awni Y Hannun, and Andrew Y Ng. 2013. Rectifier nonlinearities improve neural network acoustic models. In ICML Workshop on Deep Learning for Audio, Speech and Language Processing. Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the ACL. Christian Meilicke, Melisachew Wudage Chekol, Daniel Ruffinelli, and Heiner Stuckenschmidt. 2019. Anytime bottom-up rule learning for knowledge graph completion. In Proceedings of the IJCAI. Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. 2019. Learning attention-based embeddings for relation prediction in knowledge graphs. In Proceedings of the ACL. Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2018. A novel embedding model for knowledge base completion based on convolutional neural network. In Proceedings of the NAACL. Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2019. A capsule network-based embedding model for knowledge graph completion and search personalization. In Proceedings of the NAACL. Guanglin Niu, Yongfei Zhang, Bo Li, Peng Cui, Si Liu, Jingyang Li, and Xiaowei Zhang. 2020. Rule-guided compositional representation learning on knowledge graphs. In Proceedings of the AAAI. Byungkook Oh, Seungmin Seo, and Kyong-Ho Lee. 2018. Knowledge graph completion by contextaware convolutional learning with multi-hop neighborhoods. In Proceedings of the CIKM. Meng Ou and Jian Tang. 2019. Probabilistic logic neural networks for reasoning. In Proceedings of the NeurIPS. Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In Proceedings of the ESWC. Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. 2019. End-to-end structure-aware convolutional networks for knowledge base completion. In Proceedings of the AAAI. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In Proceedings of the ICLR. Zhiqing Sun, Shikhar Vashishth, Soumya Sanyal, Partha P. Talukdar, and Yiming Yang. 2020. A reevaluation of knowledge graph completion methods. In Proceedings of the ACL. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In Proceedings of the CVPR. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the CVPR. Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the EMNLP. Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, ´Eric Gaussier, and Guillaume Bouchard. 2016. 
Complex embeddings for simple link prediction. In Proceedings of ICML. Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, Nilesh Agrawal, and Partha Talukdar. 2020. Interacte: Improving convolution-based knowledge graph embeddings by increasing feature interactions. In Proceedings of the AAAI. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the NIPS. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph attention networks. In Proceedings of the ICLR (Poster). 5939 Kai Wang, Yu Liu, Xiujuan Xu, and Dan Lin. 2018. Knowledge graph embedding with entity neighbors and deep memory network. CoRR, abs/1808.03752. Quan Wang, Pingping Huang, Haifeng Wang, Songtai Dai, Wenbin Jiang, Jing Liu, Yajuan Lyu, Yong Zhu, and Hua Wu. 2019. CoKE: Contextualized knowledge graph embedding. CoRR, abs/1911.02168. Canran Xu and Ruijiang Li. 2019. Relation embedding with dihedral group in knowledge graph. In Proceedings of the ACL, pages 263–272. Bishan Yang and Tom Mitchell. 2017. Leveraging knowledge bases in LSTMs for improving machine reading. In Proceedings of the ACL. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In Proceedings of ICLR. Shihui Yang, Jidong Tian, Honglun Zhang, Junchi Yan, Hao He, and Yaohui Jin. 2019. Transms: Knowledge graph embedding for complex relations by multidirectional semantics. In Proceedings of the IJCAI. Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. 2019. Quaternion knowledge graph embedding. In Proceedings of the NeurIPS. Zhanqiu Zhang, Jianyu Cai, Yongdong Zhang, and Jie Wang. 2020. Learning hierarchy-aware knowledge graph embeddings for link prediction. In Proceedings of the AAAI.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5940–5950 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5940 Relabel the Noise: Joint Extraction of Entities and Relations via Cooperative Multiagents Daoyuan Chen1 Yaliang Li1 Kai Lei2 Ying Shen3 ∗ 1Alibaba Group 2School of Electronics and Computer Engineering, Peking University 3School of Intelligent Systems Engineering, Sun Yat-Sen University 1{daoyuanchen.cdy, yaliang.li}@alibaba-inc.com [email protected], [email protected] Abstract Distant supervision based methods for entity and relation extraction have received increasing popularity due to the fact that these methods require light human annotation efforts. In this paper, we consider the problem of shifted label distribution, which is caused by the inconsistency between the noisy-labeled training set subject to external knowledge graph and the human-annotated test set, and exacerbated by the pipelined entity-then-relation extraction manner with noise propagation. We propose a joint extraction approach to address this problem by re-labeling noisy instances with a group of cooperative multiagents. To handle noisy instances in a fine-grained manner, each agent in the cooperative group evaluates the instance by calculating a continuous confidence score from its own perspective; To leverage the correlations between these two extraction tasks, a confidence consensus module is designed to gather the wisdom of all agents and re-distribute the noisy training set with confidence-scored labels. Further, the confidences are used to adjust the training losses of extractors. Experimental results on two realworld datasets verify the benefits of re-labeling noisy instance, and show that the proposed model significantly outperforms the state-ofthe-art entity and relation extraction methods. 1 Introduction The extraction of entities and relations has long been recognized as an important task within natural language processing, as it facilitates text understanding. The goal of the extraction task is to identify entity mentions, assign predefined entity types, and extract their semantic relations from text corpora. For example, given a sentence “Washington is the president of the United States of America”, ∗Corresponding author. an extraction system will find a PRESIDENT OF relation between PERSON entity “Washington” and COUNTRY entity “United States of America”. A major challenge of the entity and relation extraction task is the absence of large-scale and domain-specific labeled training data due to the expensive labeling efforts. One promising solution to address this challenge is distant supervision (DS) (Mintz et al., 2009; Hoffmann et al., 2011), which generates labeled training data automatically by aligning external knowledge graph (KG) to text corpus. Despite its effectiveness, the aligning process introduces many noisy labels that degrade the performance of extractors. To alleviate the introduced noise issue of DS, extensive studies have been performed, such as using probabilistic graphical models (Surdeanu et al., 2012), neural networks with attention (Zeng et al., 2015; Lin et al., 2016) and instance selector with reinforcement learning (RL) (Qin et al., 2018; Feng et al., 2018). However, most existing works overlooked the shifted label distribution problem (Ye et al., 2019), which severely hinders the performance of DS-based extraction models. 
Specifically, there is a label distribution gap between DS-labeled training set and human-annotated test data, since two kinds of noisy labels are introduced and they are subject to the aligned KG: (1) False Positive: unrelated entity pair in the sentence while labeled as relations in KG; and (2) False Negative: related entity pair while neglected and labeled as NONE. Existing denoising works assign low weights to noisy instances or discard false positives while not recovering the original labels, leaving the shifted label distribution problem unsolved. Moreover, most denoising works assume that the target entities have been extracted, i.e., the entity and relation extraction is processed in a pipe-lined manner. By extracting entities first and then classifying predefined relations, the entity extraction 5941 Confidence Evaluators Re-distribution with Confidence-Scored Labels Entity-View Agents Noisy Training Dataset Positive Instances False Positives Relation-View Agents Confidence-Scored Labels Positive Instances Negative Instances False Negatives False Positives Partial False Positive Partial False Negatives Reward Confidence Consensus Action Action Base Extractors Iteratively Re-Train Negative Instances False Negatives Entity Classifier Relation Classifier Figure 1: Overview of the proposed method. A group of multiagents are leveraged to evaluate the confidences of noisy instances from different extraction views. Base extractors are refined by iteratively training on the redistributed instances with confidence-scored labels. errors will be propagated to the relation extractor, introducing more noisy labels and exacerbating the shifted label problem. Besides, there are some correlations and complementary information between the two extraction tasks, which are under-utilized but can provide hints to reduce noises more precisely, e.g., it is unreasonable to predict two COUNTRY entities as the relation PRESIDENT OF. In this paper, to reduce the shifted label distribution gap and further enhance the DS-based extraction models, we propose a novel method to re-label the noisy training data and jointly extract entities and relations. Specifically, we incorporate RL to re-label noisy instances and iteratively retrain entity and relation extractors with adjusted labels, such that the labels can be corrected by trial and error. To leverage the correlations between the two extraction tasks, we train a group of cooperative multiagents to evaluate the instance confidence from different extraction views. Through a proposed confidence consensus module, the instances are re-labeled with confidence-scored labels, and such confidence information will be used to adjust the training loss of extractors. Finally, the performances of extractors are refined by exploring suitable label distributions with iterative re-training. Empirical evaluations on two real-world datasets show that the proposed approach can effectively help existing extractors to achieve remarkable extraction performance with noisy labels, and the agent training is efficient with the help of correlations between these two extraction tasks. 2 Methodology 2.1 Overview In this research, we aim to refine entity extractor and relation extractor trained with DS, by incorporating a group of cooperative multiagents. Formally, given a DS training corpus D = {s1, . . . , sn}, an entity extractor θ′ e and a relation extractor θ′ r trained on D are input into the multiagents. 
The agents re-distribute D with confidence-scored labels and output two refined extractors θ∗ e and θ∗ r using the adjusted labels. Towards this purpose, we model our problem as a decentralized multiagents RL problem, where each agent receives local environmental observation and takes action individually without inferring the policies of other agents. It is hard to directly evaluate the correctness of adjusted noisy labels since we do not know the “gold” training label distributions suitable to the test set. Nonetheless, we can apply RL to indirectly judge the re-labeling effect by using performance scores on an independent validation set as rewards, which is delayed over the extractor re-training. Further, the decentralization setting allows the interaction between the distinct information of entity and relation extractors via intermediate agents. As shown in Figure 1, a group of agents acts as confidence evaluators, and the external environment consists of training instances and classification results of extractors. Each agent receives a private observation from the perspective of entity 5942 extractor or relation extractor, and makes an independent action to compute a confidence score of the instance. These actions (confidence scores) will then be considered together by the confidence consensus module, which determines whether the current sentence is positive or negative and assigns a confidence score. Finally, the updated confidences are used to retrain extractors, the performance score on validation set and the consistent score of the two extractors are combined into rewards for agents. The proposed method can be regarded as a postprocessing plugin for existing entity and relation extraction model. That is, we design a general framework of the states, actions and rewards by reusing the inputs and outputs of the extractors. 2.2 Confidence Evaluators as Agents A group of cooperative multiagents are used to evaluate the confidence of each instance. These multiagents are divided into two subgroups, which act from the perspective of entity and relation respectively. There can be multiple agents in each subgroup for the purpose of scaling to larger observation space and action space for better performance. Next, we will detail the states, actions and rewards of these agents. States The states Se for entity-view agents and Sr for relation-view agents represent their own viewpoint to evaluate the instance confidence. Specifically, entity-view agents evaluate sentence confidence according to three kinds of information: current sentence, the entity extraction results (typed entity) and the noisy label types. Similarly, relationview agents make their decisions depending on the current sentence, the relation types from relation extractor and the noisy label types from DS. Most entity and relation extractors encode the semantic and syntactic information of extracted sentences into low-dimension embeddings as their inputs. For entity types and relation types, we also encode them into embeddings and some extractors have learned these vectors such as CoType (Ren et al., 2017). Given reused extractors, we denote the encoded sentence vector as s, the extracted type vector as te and tr for entity and relation respectively, and DS type vectors as te d and tr d for entity and relation respectively. We reuse the sentence and type vectors of base extractors to make our approach lightweight and pluggable. 
Finally, we average the extracted and DS type embeddings to decrease the size of observation space, and concatenate them with the sentence embedding s to form the states Se and Sr for entity/relation agents respectively as follows: Se = s∥(te + te d)/2, Sr = s∥(tr + tr d)/2, (1) Note that we have encoded some semantics into the type vectors, e.g., the margin-based loss used in CoType enforces the type vectors are closer to their candidate type vectors than any other noncandidate types. Intuitively, in the representation spaces, the average operation leads in the midpoint of extracted type vector and DS type vector, which partially preserves the distance property among the two vectors and other type vectors, so that helps form distinguishable states. Actions To assign confidence in a fine-grained manner and accelerate the learning procedure, we adopt a continuous action space. Each agent uses a neural policy network Θ to determine whether the current sentence is positive (conform with the extracted type ti) or negative (“None” type) and computes a confidence score c. We model this action as a conditional probability prediction, i.e., estimate the probability as confidence given by the extracted type ti and the current state S: c = p(positive|ti, Θ, S). We adopt gated recurrent unit (GRU) as policy network, which outputs the probability value using sigmoid function. A probability value (confidence score) which is close to 1/0 means that the agent votes a sentence as positive/negative with a high weight. To handle huge state spaces (e.g., there are thousands of target types in our experimental dataset) and make our approach scalable, here we divide and conquer the state space by using more than one agent in entity-view and relation-view groups. The target type set is divided equally by agent number and each agent only is in charge of a part of types. Based on the allocation and DS labels, one sentence is evaluated by only one relation agent and two entity agents at a time, meanwhile, the other agents are masked. Re-labeling with Confidence Consensus To leverage the wisdom of crowds, we design a consensus strategy for the evaluated confidences from multiagents. This is conducted by two steps: gather confidences and re-label with confidence score. Specifically, we calculate an averaged score as ¯c = csum/3, where csum is the sum of all agent confidences and the dividing means three agents 5943 evaluated the present sentence due to the above masking action strategy. Then we label the current sentence as negative (“None” type) with confidence C = 1 −¯c if ¯c ≤0.5, otherwise we label the current sentence as positive (replace noisy label with extracted type) with confidence C = ¯c. This procedure can be regarded as weighted voting and re-distribute the training set with confidence-scored labels as shown in the right part of Figure 1, where some falsely labeled instances are put into intended positions or assigned with low confidences. Rewards The reward of each agent is composed of two parts: shared global reward g expressing correlations among sub-tasks, and separate local rewards restricting the reward signals to different three agents for different sentences (recall that we evaluate each sentence by different agents w.r.t their responsible types). 
Specifically, the global reward g can give hints for denoising and here we adopt a general, translation-based triple score as used in TransE (Bordes et al., 2013) g = ||t1 + tr −t2||, where t1, tr and t2 are embeddings for triple (E1, R, E2) and pre-trained by TransE. The score is used to measure the semantic consistency of each triple and can be easily extended with many other KG embedding methods (Wang et al., 2017). As for the separate local reward, we use F1 scores F e 1 and F r 1 to reflect the extractor performance, which are gained by entity extractor and relation extractor on an independent validation dataset 1 respectively. Finally, to control the proportions of two-part rewards, we introduce a hyper-parameter α, which is shareable for ease of scaling to multiple agents as: re = α ∗F e 1 −g, rr = α ∗F r 1 −g. (2) 2.3 Model Learning 2.3.1 Loss Correction for Extractors With the evaluated confidences and re-labeled instances, we adjust the training losses of entity extractor and relation extractor to alleviate the performance harm from noise and shifted label distribution. Denote the original loss of extractor as ℓ, the new loss ℓ′ is adjusted by an exponential scaling factor λ and confidence C as : ℓ′ = Cλℓ. Intuitively, a small confidence score C and a large λ indicate that the current instance has almost no impact on the model optimization. This can alleviate 1To gain a relatively clean data, we randomly select 20% data from the original training set, extract them using pretrained CoType model and retain only one instance for each sentence whose DS label is the same as the extracted label. Algorithm 1 Training Framework for Extractors Input: Noisy training data D, pre-trained entity extractor θ′ e, pre-trained relation extractor θ′ r Output: refined entity/relation extractor θ∗ e, θ∗ r 1: pre-train policy networks of agents based on θ′ e and θ′ r 2: init: best F1∗ e ←F1(θ′ e), best F1∗ r ←F1(θ′ r) 3: for epoch i = 1 →N do 4: init: current extractors parameters θe ←θ′ e, θr ←θ′ r 5: for batch di ∈D do 6: extractors generate Se/Sr as Equ. (1) 7: agents take actions (confidences) 8: redistribute instances with confidences 9: train θe/θr with scaled losses ℓ′ e/ℓ′ r 10: calculate rewards re and rr as Equ. (2) 11: end for 12: if F1(θe) > F1∗ e then F1∗ e ← F1(θe), θ∗ e ←θe 13: if F1(θr) > F1∗ r then F1∗ r ← F1(θr), θ∗ r ←θr 14: end for side-effects caused by noises and prevent the gradient being dominated by noisy labels, especially for those with divergent votes since the averaging in confidence consensus module leads to a small C. 2.3.2 Training Algorithm Pre-training Many RL-based models introduce pre-training strategies to refine the agent training efficiency (Qin et al., 2018; Feng et al., 2018). In this study, we pre-train our models in two aspects: (1) we first pre-train entity and relation extractors to be refined as environment initialization, which is vital to provide reasonable agent states (embeddings of sentences and extracted types). (2) we then pre-train the policy networks of agents to gain a preliminary ability to evaluate confidence. In order to guide the instance confidence evaluation, we extract a small part of the valid data. The relatively clean DS type labels of the valid data are used to form states. The binary label is assigned according to the valid data and the policy networks are pre-trained for several epochs. 
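To make the consensus, reward, and loss-correction computations above concrete, the following Python sketch puts the confidence consensus rule, Equation (2), and the scaled loss ℓ′ = C^λ ℓ side by side. The function names, the NumPy-based triple score, and the scalar interface are illustrative assumptions rather than the authors' released implementation; alpha = 2 and lambda = 2 follow the settings reported later in Section 3.1.

import numpy as np

def consensus_relabel(agent_scores, extracted_type):
    # Confidence consensus (Section 2.2): three agents evaluate each sentence,
    # so the averaged score is c_bar = c_sum / 3.
    c_bar = sum(agent_scores) / 3.0
    if c_bar > 0.5:
        return extracted_type, c_bar        # positive, confidence C = c_bar
    return "None", 1.0 - c_bar              # negative, confidence C = 1 - c_bar

def triple_score(t_head, t_rel, t_tail):
    # Translation-based consistency score g = ||t1 + tr - t2|| over TransE embeddings.
    return np.linalg.norm(t_head + t_rel - t_tail)

def agent_rewards(f1_entity, f1_relation, g, alpha=2.0):
    # Equation (2): each agent's local F1 term scaled by alpha, offset by the shared global score g.
    return alpha * f1_entity - g, alpha * f1_relation - g

def corrected_loss(loss, confidence, lam=2.0):
    # Loss correction (Section 2.3.1): l' = C^lambda * l, so instances with low or
    # divergently-voted confidence contribute little to the gradient.
    return (confidence ** lam) * loss

Keeping these pieces separate mirrors their roles in Algorithm 1: the consensus re-labels each batch, the corrected loss re-trains the extractors, and the rewards are computed afterwards to update the agents.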
Although the binary labels from valid data are not exactly the continuous confidence, the policy networks gain a better parameter initialization than random initialization by this approximate training strategy. 5944 Iterative Re-training With the pre-trained extractors and policy networks, we retrain extractors and agents as Algorithm 1 detailed. The agents refine extractors in each epoch and we record parameters of extractors that achieve best F1 performance. For each data batch, entity and relation extractor perform extraction, form the states Se and Sr as Equation (1), and send them to entity and relation agents respectively. Then agents take actions (evaluate confidences) and redistribute instance based on confidences consensus module (Section 2.2). Finally extractors are trained with confidences and give rewards as Equation (2). Curriculum Learning for Multiagents It is difficult to learn from scratch for many RL agents. In this study, we extend the curriculum learning strategy (Bengio et al., 2009) to our cooperative multiagents. The motivation is that we can leverage the complementarity of the two tasks and enhance the agent exploration by smoothly increasing the policy difficulty. To be more specific, we maintain a priority queue and sample instances ordered by their reward values. Once the reward of current sentence excesses the training reward threshold rthreshold or the queue is full, we then learn agents policies using Proximal Policy Optimization (PPO) (Schulman et al., 2017) algorithm, which achieves good performances in many continuous control tasks. Algorithm 2 details the training procedure. 3 Experiments 3.1 Experimental Setup Datasets We evaluate our approach on two public datasets used in many extraction studies (Pyysalo et al., 2007; Ling and Weld, 2012; Ren et al., 2017): Wiki-KBP: the training sentences are sampled from Wikipedia articles and the test set are manually annotated from 2013 KBP slot filling task; BioInfer: the dataset is sampled and manually annotated from biomedical paper abstracts. The two datasets vary in domains and scales of type set, detailed statistics are shown in Table 1. Datasets Wiki-KBP BioInfer #Relation / entity types 19 / 126 94 / 2,200 #Train M r / M e 148k / 247k 28k / 53k #Test M r / M e 2,948 / 1,285 3,859 / 2,389 Table 1: Datasets statistics. M r and M e indicates relation and entity mentions respectively. 
Algorithm 2 Curriculum Training with PPO for each Agent Input: Data batch di, queue size l, pre-trained policy network with parameter Θ′ Output: Policy network parameter Θ 1: initialize an empty priority queue q with size l 2: for sentence sj ∈di do 3: if rc > rthreshold or q is full then 4: run policy Θ′ on environment sj 5: compute advantage estimate ˆA using Generalized Advantage Estimator (GAE) (Schulman et al., 2015) 6: optimize agent loss L (adaptive KL penalty form) w.r.t Θ using SGD 7: Θ′ ←Θ 8: if q is full then 9: pull highest priority sentence 10: end if 11: else 12: insert sj into q with priority rc 13: end if 14: end for Baselines For relation extraction, we compare with both pipe-lined methods and joint extraction methods: MintZ (Mintz et al., 2009) is a featurebased DS method using a logistic classifier; MultiR (Hoffmann et al., 2011) models noisy DS labels with multi-instance multi-label learning; DS-Joint (Li and Ji, 2014) jointly extracts entities and relations using structured perceptron; FCM (Gormley et al., 2015) introduces a neural model to learn linguistic compositional representations; PCNN (Zeng et al., 2015) is an effective relation extraction architecture with piece-wise convolution; CoType (Ren et al., 2017) is a state-of-the-art joint extraction method leveraging representation learning for both entity and relation types; RRL-PCNN (Qin et al., 2018) is a state-of-the-art RL-based method, which takes PCNN as base extractor and can also be a plugin to apply to different relation extractors; ARNOR (Jia et al., 2019) is a state-of-the-art de-noising method, which proposes attention regulation to learn relation patterns; BA-fix-PCNN (Ye et al., 2019) greatly improves the extraction performance by introducing 20% samples of the test set and estimate its label distribution to adjust the classifier of PCNN. For entity extraction methods, we compare with a supervised type classification method, HYENA (Yosef et al., 2012); a heterogeneous partial-label 5945 Wiki-KBP BioInfer Methods S-F1 Ma-F1 Mi-F1 S-F1 Ma-F1 Mi-F1 HYENA 0.26 0.43 0.39 0.52 0.54 0.56 FIGER 0.29 0.56 0.54 0.69 0.71 0.71 WSABIE 0.35 0.55 0.50 0.64 0.66 0.65 PLE 0.37 0.57 0.53 0.70 0.71 0.72 CoType 0.39 0.61 0.57 0.74 0.76 0.75 MRL-CoType ( improvements) 0.42±7.2e-3 0.64±1.1e-2 0.60±8.3e-3 0.77±6.5e-3 0.79±1.3e-2 0.78±7.4e-3 (+7.69%) (+4.92%) (+5.26%) (+4.05%) (+3.95%) (+4.00%) Table 2: NER performance on two datasets, 3-time average results with standard deviations are reported. Wiki-KBP BioInfer Methods Precision Recall F1 Precision Recall F1 MintZ 0.296 0.387 0.335 0.572 0.255 0.353 MultiR 0.325 0.278 0.301 0.459 0.221 0.298 DS-Joint 0.444 0.043 0.078 0.584 0.001 0.002 FCM 0.151 0.500 0.301 0.535 0.168 0.255 ARNOR 0.453 0.338 0.407 0.589 0.382 0.477 BA-Fix-PCNN 0.457 0.341 0.409 0.587 0.384 0.478 RRL-PCNN 0.435 0.322 0.392 0.577 0.381 0.470 PCNN 0.423 0.310 0.371 0.573 0.369 0.461 MRL-PCNN (improvements) 0.461±2.5e-3 0.325±2.3e-3 0.407±1.4e-3 0.590±1.1e-3 0.386±2.3e-3 0.483±2.8e-3 (+8.98%) (+4.83%) (+9.70%) (+2.97%) (+4.61%) (+4.77%) CoType 0.348 0.406 0.369 0.536 0.424 0.474 MRL-CoType (improvements) 0.417±1.9e-3 0.415±1.6e-3 0.416±1.7e-3 0.595±2.1e-3 0.437±1.8e-3 0.498±2.0e-3 (+19.83%) (+2.22%) (+12.74%) (+11.01%) (+3.01%) (+5.63%) Table 3: End-to-end relation extraction performance, 3-time average results with standard deviations are reported. embedding method, PLE (Ren et al., 2016); and two DS methods FIGER (Ling and Weld, 2012) and WSABIE (Yogatama et al., 2015). 
Multiagents Setup To evaluate the ability of our approach to refine existing extractors, we choose two basic extractors for our Multiagent RL approach, CoType and PCNN, and denote them as MRL-CoType and MRL-PCNN respectively. Since PCNN is a pipe-lined method, we reuse a pre-trained and fixed CoType entity extractor, and adopt PCNN as base relation extractor to adapt to the joint manner. For the CoType, we use the implementation of the original paper 2, and adopt the same sentence dimension, type dimension and hyper-parameters settings as reported in (Ren et al., 2017). For the PCNN, we set the number of kernel to be 230 and the window size to be 3. For the KG embeddings, we set the dimension to be 50 and pre-train them by TransE. We use Stochasitc Gradient Descent and learning rate scheduler with cosine annealing to optimize both the agents and extractors, the learning rate range and batch size is set to be [1e-4, 1e-2] and 64 respectively. We implement our RL agents using a scalable RL library, RLlib (Liang et al., 2018), and adopt 2/8 relation agents and 2/16 entity agents for Wiki2https://github.com/INK-USC/DS-RelationExtraction KBP/BioInfer datasets respectively, according to their scales of type sets. For the multi-agents, due to the limitation of RL training time, we set the PPO parameters as default RLlib setting and perform preliminary grid searches for other parameters. For the PPO algorithm, we set the GAE lambda parameter to be 1.0, the initial coefficient for KL divergence to be 0.2. The loss adjusting factor λ is searched among {1, 2, 4} and set to be 2, the reward control factors α is searched among {2e-1, 1, 2, 4} and set to be 2. For all agents, the dimensions of GRU is searched among {32, 64}, and the setting as 64 achieved sightly better performance than setting as 32, while the larger dimension setting leads to higher memory overhead for each agent. Hence we set it to be 32 to enable a larger scale of the agents. 3.2 Effectiveness of Multiagents 3.2.1 Performance on Entity Extraction We adopt the Macro-F1, Micro-F1 and Strict-F1 metrics (Ling and Weld, 2012) in the entity extraction evaluation. For Strict-F1, the entity prediction is considered to be “strictly” correct if and only if when the true set of entity tags is equal to the prediction set. The results are shown in Table 2 and we can see that our approach can effectively refine the base extractors and outperform all baseline 5946 Wiki-KBP BioInfer Settings Precision(%) Recall(%) F1(%) Precision(%) Recall(%) F1(%) Curriculum 41.7±0.19 41.5±0.16 41.6±0.17 59.5±0.21 43.7±0.18 49.8±0.20 Joint (w/o curriculum) 41.3±0.22 40.9±0.20 41.1±0.21 58.7±0.24 42.6±0.19 48.5±0.23 Separate (w/o joint) 38.8±0.24 40.5±0.27 38.4±0.25 54.7±0.27 41.3±0.23 47.6±0.26 Table 4: Ablation results of the MRL-CoType for end-to-end relation extraction. methods on all metrics. Note that the refinements on BioInfer is significant (t-test with p < 0.05) even though the BioInfer has a large entity type set (2,200 types) and the base extractor CoType has achieved a high performance (0.74 S-F1), which shows that our agents are capable of leading entity extractors towards a better optimization with noisy. 
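For reference, the three entity-typing scores used above can be computed as in the following sketch, which follows the definitions commonly associated with Ling and Weld (2012): Strict-F1 requires the predicted type set to equal the gold set, Macro-F1 averages per-mention precision and recall, and Micro-F1 pools the counts over all mentions. The sketch assumes every mention receives at least one gold and one predicted type; the paper's actual scorer may differ in such details.

def typing_f1(gold, pred):
    # gold, pred: parallel lists of type sets, one pair per entity mention;
    # every mention is assumed to carry at least one gold and one predicted type.
    n = len(gold)
    f1 = lambda p, r: 2 * p * r / (p + r) if p + r > 0 else 0.0
    # Strict: a mention counts only if the prediction set equals the gold set,
    # so Strict-F1 reduces to exact-match accuracy under the assumption above.
    strict = sum(g == p for g, p in zip(gold, pred)) / n
    # Macro: average per-mention precision and recall before the harmonic mean.
    ma_p = sum(len(g & p) / len(p) for g, p in zip(gold, pred)) / n
    ma_r = sum(len(g & p) / len(g) for g, p in zip(gold, pred)) / n
    # Micro: pool intersection, prediction, and gold counts over all mentions.
    inter = sum(len(g & p) for g, p in zip(gold, pred))
    mi_p = inter / sum(len(p) for p in pred)
    mi_r = inter / sum(len(g) for g in gold)
    return strict, f1(ma_p, ma_r), f1(mi_p, mi_r)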
3.2.2 Performance on Relation Extraction Another comparison is the end-to-end relation extraction task, we report the precision, recall and F1 results in Table 3 and it illustrates that: (1) Our method achieves best F1 for Wiki-KBP, outperforms all baselines on all metrics for BioInfer data, and significantly refines both the two base extractors, PCNN and CoType (t-test with p < 0.05), demonstrating the effectiveness of our approach. (2) The improvements for CoType are larger than PCNN. Since CoType is a joint extraction model and leverages multi-agents better than the singletask extractor with fixed entity extractor. This shows the benefit of correlations between the two extraction tasks. (3) Using the same base relation extractor, the MRL-PCNN achieves significantly better improvements than RRL-PCNN (t-test with p < 0.05). Besides, the precision of RRL-PCNN method is relatively worse than recall, which is mainly caused by the noise propagation of entity extraction and its binary discard-or-retain action. By contrast, our model achieves better and more balanced results by leveraging the cooperative multiagents with finegrained confidences. (4) The MRL-PCNN gains comparable performance with BA-Fix-PCNN, which leverages the additional information from the test set to adjust softmax classifier. This verifies the effectiveness and the robustness of the proposed RL-based relabeling method to reduce the shifted label distribution gap without knowing the test set. 3.3 Ablation Analysis To evaluate the impact of curriculum learning strategy and joint learning strategy of our method, we compare three training settings: curriculum learnCurriculum Joint Separate 0 10 20 70 80 Epochs 1 0.6 0.2 -0.2 -0.6 -1 Average Rewards Curriculum Joint Separate 30 40 50 60 1 0.6 0.2 -0.2 -0.6 -1 Average Rewards Relation Agent Entity Agent Figure 2: Smoothed average rewards on Wiki-KBP data for two agents of MRL-CoType. The light-colored lines are un-smoothed rewards. ing, standard training procedure as described in Section 2.3; joint multiagents training without curriculum learning (randomly sample training instances); and separate training without the participation of other agents using a pipeline manner, i.e., train an entity agent with only entity extractor and train a relation agent with only relation extractor. The end-to-end relation extraction results are reported in Table 4. The curriculum setting and the joint setting achieve much better results than the separate training setting. This shows the superiority of cooperative multi-agents over single view extraction, which evaluates confidences with limited information. Besides, the curriculum setting achieves better results than the joint setting, especially on the BioInfer data, which has a larger type set and is more challenging than Wiki-KBP. This indicates the effectiveness of the curriculum learning strategy, which enhances the model ability to handle large state space with gradual exploration. Training efficiency is an important issue for RL methods since the agents face the explorationexploitation dilemma. We also compare the three settings from the view of model training. Figure 2 reports the average rewards for an entity agent and a relation agent on Wiki-KBP respectively. 
A high average reward indicates that the agent is trained 5947 0 5 10 15 20 25 30 relation mentions entity mentions proportion (%) N-to-P N-to-P-divergent P-to-N P-to-N-divergent Wiki-KBP 0 5 10 15 20 25 30 relation mentions entity mentions proportion (%) BioInfer N-to-P N-to-P-divergent P-to-N P-to-N-divergent Figure 3: Proportions of re-labeled instances for MRLCoType. “N-to-P” denotes the instances are re-labeled from negative to positive. “divergent” means that entity agents and relation agent have different evaluations about whether the instance is positive or negative. effectively since it made valuable decisions and received positive feedback. From it we have the following observations: (1) The curriculum setting and the joint setting gain better performance than the separate training, which is consistent with the end-to-end extraction results. The improvement comes from the mutual enhancement among agents, since the correlations between the two tasks can restrict the reward signals to only those agents involved in the success or failure on the task; (2) The curriculum learning achieves higher rewards than the other two settings with fewer epochs, since that the convergence to local optimum can be accelerated by smoothly increasing the instance difficulty, and the multiagents provide a regularization effect. 3.4 Re-labeling Study To gain insight into the proposed method, we conduct a statistic on the final re-labeled instances. Figure 3 reports the results and shows that our approach identifies some noisy instances including both positives and negatives, and leverage them in a fine-grained manner comparing with discard-orretain strategy. Besides, the instances which are re-labeled from negatives to positives take a larger proportion than those with inverse re-labeling assignments, especially on Wiki-KBP data. This is in accordance with the fact that many noisy labels are “None” in DS setting. Note that some instances are re-labeled with divergent evaluations between entity-view and relation-view agents, which are usually get low confidences through the consensus module and have a small impact on the optimization with damping losses. We further sample two sentences to illustrate the re-labeling processes. On Table 5, the first sentence has a noisy relation label None, while the relation extractor recognizes it as country of birth relation. Based on the extracted type, the relation-view agent evaluates it as a confidential positive instance due to the typical pattern “born in” in the sentence. The entity-view agents also evaluate it as positive with relatively lower confidences, and finally the sentence is re-labeled as positive by the consensus module. For the second sentence, agents disagree that it is positive. With the help of diverse extraction information, the consensus module re-labels the instance with low confidence score, and further alleviates the performance harm by loss damping. 4 Related Works Many entity and relation extraction methods have been proposed with the pipelined fashion, i.e., perform named entity recognition (NER) first and then relation classification. Traditional NER systems usually focus on a few predefined types with supervised learning (Yosef et al., 2012). However, the expensive human annotation blocks the large-scale training data construction. Recently, several efforts on DS and weak supervision (WS) NER extraction have been made to address the training data bottleneck (Yogatama et al., 2015; Yang et al., 2018). 
For relation extraction, there are also many DS methods (Mintz et al., 2009; Min et al., 2013; Zeng et al., 2015; Han and Sun, 2016; Ji et al., 2017; Lei et al., 2018) and WS methods (Jiang, 2009; Ren et al., 2016; Deng et al., 2019) to address the limitation of supervised methods. Our method can be applied for a large number of those extractors as a post-processing plugin since the DS and WS usually incorporate many noises. A recent work CrossWeigh (Wang et al., 2019) estimates the label mistakes and adjusts the weights of sentences in the NER benchmark CoNLL03. They focus on the noises of supervised “gold standard” labels while we focus on the noises of automatically constructed “silver standard” labels. Moreover, we deal with the noises by considering the shifted label distribution problem, which is overlooked by most existing DS works. In Ye et al. (2019), this issue is analyzed and authors improve performance significantly by using the distribution information from test set. In this paper, we propose to use RL to explore suitable label distributions by re-distributing the training set with confidencescored labels, which is practical and robust to label distribution shift since we may not know the distribution of test set in real-world applications. Another extraction manner is joint extraction, such as methods based on neural network with parameter sharing (Miwa and Bansal, 2016), represen5948 Sentence 1, False Negative, Label: (Bashardost[/person], None, Ghazni[/location]) Entity Extractor Relation Extractor Entity Agents Relation Agent Confidence Consensus Bashardost, an ethnic Hazara, was born in Ghazni province to a family of government employees. Bashardost[/person] Ghazni[/location] country of birth 0.772 0.729 0.896 Positive (0.799) Sentence 2, False Positive, Label: (profilin[/Protein], POS ACTION Physical, actin[/Protein]) Acanthamoeba profilin affects the mechanical properties of nonfilamentous actin. profilin[/None] actin[/Protein] None 0.373 0.791 0.236 Negative (0.533) Table 5: Confidence evaluations on two noisy instances using MRL-CoType. tation learning (Ren et al., 2017) and new tagging scheme (Zheng et al., 2017). However, these works perform extraction without explicitly handling the noises. Our approach introduces multiagents to the joint extraction task and explicitly model sentence confidences. As for the RL-based methods, in Zeng et al. (2018), RL agent is introduced as bag-level relation predictor. Qin et al. (2018) and Feng et al. (2018) use agent as instance selectors to discard noisy instances in sentence-level. Different from adopting a binary action strategy and only focus on false positives in these works, we adopt a continuous action space (confidence evaluation) and handle the noises in a fine-grained manner. The binary selection strategy is also adopted in a related study, Reinforced Co-Training (Wu et al., 2018), which uses an agent to select instances and help classifiers to form auto-labeled datasets. An important difference is that they select unlabeled instances while we evaluate noisy instances and relabel them. More recently, HRL (Takanobu et al., 2019) uses a hierarchical agent to first identifies relation indicators and then entities. Different from using one task-switching agent of this work, we leverage a group of multiagents, which can be a pluggable helper to existing extraction models. 
5 Conclusions To deal with the noise labels and accompanying shifted label distribution problem in distant supervision, in this paper, we propose a novel method to jointly extract entity and relation through a group of cooperative multiagents. To make full use of each instance, each agent evaluates the instance confidence from different views, and then a confidence consensus module is designed to re-label noisy instances with confidences. Thanks to the exploration of suitable label distribution by RL agents, the confidences are further used to adjust the training losses of extractors and the potential harm caused by noisy instances can be alleviated. To demonstrate the effectiveness of the proposed method, we evaluate it on two real-world datasets and the results confirm that the proposed method can significantly improve extractor performance and achieve effective learning. Acknowledgements This work is supported by the National Natural Science Foundation of China (No.61602013), and the Shenzhen General Research Project (No. JCYJ20190808182805919). References Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning., pages 41–48. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in neural information processing systems., pages 2787–2795. Yang Deng, Yaliang Li, Ying Shen, Nan Du, Wei Fan, Min Yang, and Kai Lei. 2019. Medtruth: A semisupervised approach to discovering knowledge condition information from multi-source medical data. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 719–728. Jun Feng, Minlie Huang, Li Zhao, Yang Yang, and Xiaoyan Zhu. 2018. Reinforcement learning for relation classification from noisy data. In Thirty-Second AAAI Conference on Artificial Intelligence. Matthew R Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich compositional embedding models. Proceedings of the 2015 joint conference on empirical methods in natural language processing and computational natural language learning, pages 1774—-1784. Xianpei Han and Le Sun. 2016. Global distant supervision for relation extraction. In Proceedings of the AAAI Conference on Artificial Intelligence. 5949 Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 541–550. Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2017. Distant supervision for relation extraction with sentence-level attention and entity descriptions. In Thirty-Second AAAI Conference on Artificial Intelligence. Wei Jia, Dai Dai, Xinyan Xiao, and Hua Wu. 2019. Arnor: Attention regularization based noise reduction for distant supervision relation classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Jing Jiang. 2009. Multi-task transfer learning for weakly-supervised relation extraction. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics, pages 1012–1020. Kai Lei, Daoyuan Chen, Yaliang Li, Nan Du, Min Yang, Wei Fan, and Ying Shen. 2018. Cooperative denoising for distantly supervised relation extraction. 
In Proceedings of the 27th International Conference on Computational Linguistics., pages 426– 436. Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 402–412. Eric Liang, Richard Liaw, Robert Nishihara, Philipp Moritz, Roy Fox, Ken Goldberg, Joseph E. Gonzalez, Michael I. Jordan, and Ion Stoica. 2018. RLlib: Abstractions for distributed reinforcement learning. In Proceedings of the 35th annual international conference on machine learning. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2124–2133. Xiao Ling and Daniel S Weld. 2012. Fine-grained entity recognition. In Twenty-Sixth AAAI Conference on Artificial Intelligence, volume 12, pages 94–100. Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant supervision for relation extraction with an incomplete knowledge base. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 777–782. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics, pages 1003–1011. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1105—-1116. Sampo Pyysalo, Filip Ginter, Juho Heimonen, Jari Bj¨orne, Jorma Boberg, Jouni J¨arvinen, and Tapio Salakoski. 2007. Bioinfer: a corpus for information extraction in the biomedical domain. BMC bioinformatics, 8(1):50. Pengda Qin, Weiran Xu, and William Yang Wang. 2018. Robust distant supervision relation extraction via deep reinforcement learning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Xiang Ren, Wenqi He, Meng Qu, Clare R Voss, Heng Ji, and Jiawei Han. 2016. Label noise reduction in entity typing by heterogeneous partial-label embedding. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining., pages 1825–1834. Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R Voss, Heng Ji, Tarek F Abdelzaher, and Jiawei Han. 2017. Cotype: Joint extraction of typed entities and relations with knowledge bases. In WWW, pages 1015–1024. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. 2015. Highdimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. In Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning, pages 455–465. Ryuichi Takanobu, Tianyang Zhang, Jiexi Liu, and Minlie Huang. 2019. A hierarchical framework for relation extraction with reinforcement learning. 
In Proceedings of the AAAI Conference on Artificial Intelligence. Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724– 2743. 5950 Zihan Wang, Jingbo Shang, Liyuan Liu, Lihao Lu, Jiacheng Liu, and Jiawei Han. 2019. Crossweigh: Training named entity tagger from imperfect annotations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5157–5166. Jiawei Wu, Lei Li, and William Yang Wang. 2018. Reinforced co-training. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1252–1262. Yaosheng Yang, Wenliang Chen, Zhenghua Li, Zhengqiu He, and Min Zhang. 2018. Distantly supervised NER with partial annotation learning and reinforcement learning. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2159–2169. Qinyuan Ye, Liyuan Liu, Maosen Zhang, and Xiang Ren. 2019. Looking beyond label noise: Shifted label distribution matters in distantly supervised relation extraction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3832–3841. Dani Yogatama, Daniel Gillick, and Nevena Lazic. 2015. Embedding methods for fine grained entity type classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 291–296. Mohamed Amir Yosef, Sandro Bauer, Johannes Hoffart, Marc Spaniol, and Gerhard Weikum. 2012. Hyena: Hierarchical type classification for entity names. Proceedings of the 21th International Conference on Computational Linguistics, pages 1361– 1370. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of the 2015 joint conference on empirical methods in natural language processing and computational natural language learning, pages 1753– 1762. Xiangrong Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Large scaled relation extraction with reinforcement learning. In Thirty-Second AAAI Conference on Artificial Intelligence. Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1227–1236.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5951–5960 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5951 Simplify the Usage of Lexicon in Chinese NER Ruotian Ma1∗, Minlong Peng1∗, Qi Zhang1,3, Zhongyu Wei2,3, Xuanjing Huang1 1Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University 2School of Data Science, Fudan University 3Research Institute of Intelligent and Complex Systems, Fudan University {rtma19,mlpeng16,qz,zywei,xjhuang}@fudan.edu.cn Abstract Recently, many works have tried to augment the performance of Chinese named entity recognition (NER) using word lexicons. As a representative, Lattice-LSTM (Zhang and Yang, 2018) has achieved new benchmark results on several public Chinese NER datasets. However, Lattice-LSTM has a complex model architecture. This limits its application in many industrial areas where real-time NER responses are needed. In this work, we propose a simple but effective method for incorporating the word lexicon into the character representations. This method avoids designing a complicated sequence modeling architecture, and for any neural NER model, it requires only subtle adjustment of the character representation layer to introduce the lexicon information. Experimental studies on four benchmark Chinese NER datasets show that our method achieves an inference speed up to 6.15 times faster than those of state-ofthe-art methods, along with a better performance. The experimental results also show that the proposed method can be easily incorporated with pre-trained models like BERT. 1 1 Introduction Named Entity Recognition (NER) is concerned with the identification of named entities, such as persons, locations, and organizations, in unstructured text. NER plays an important role in many downstream tasks, including knowledge base construction (Riedel et al., 2013), information retrieval (Chen et al., 2015), and question answering (Diefenbach et al., 2018). In languages where words are naturally separated (e.g., English), NER has been conventionally formulated as a sequence ∗Equal contribution. 1The source code of this paper is publicly available at https://github.com/v-mipeng/ LexiconAugmentedNER. labeling problem, and the state-of-the-art results have been achieved using neural-network-based models (Huang et al., 2015; Chiu and Nichols, 2016; Liu et al., 2018). Compared with NER in English, Chinese NER is more difficult since sentences in Chinese are not naturally segmented. Thus, a common practice for Chinese NER is to first perform word segmentation using an existing CWS system and then apply a word-level sequence labeling model to the segmented sentence (Yang et al., 2016; He and Sun, 2017b). However, it is inevitable that the CWS system will incorrectly segment query sentences. This will result in errors in the detection of entity boundary and the prediction of entity category in NER. Therefore, some approaches resort to performing Chinese NER directly at the character level, which has been empirically proven to be effective (He and Wang, 2008; Liu et al., 2010; Li et al., 2014; Liu et al., 2019; Sui et al., 2019; Gui et al., 2019b; Ding et al., 2019). A drawback of the purely character-based NER method is that the word information is not fully exploited. With this consideration, Zhang and Yang, (2018) proposed Lattice-LSTM for incorporating word lexicons into the character-based NER model. 
Moreover, rather than heuristically choosing a word for the character when it matches multiple words in the lexicon, the authors proposed to preserve all words that match the character, leaving the subsequent NER model to determine which word to apply. To realize this idea, they introduced an elaborate modification to the sequence modeling layer of the LSTM-CRF model (Huang et al., 2015). Experimental studies on four Chinese NER datasets have verified the effectiveness of Lattice-LSTM. However, the model architecture of LatticeLSTM is quite complicated. In order to introduce lexicon information, Lattice-LSTM adds several additional edges between nonadjacent characters 5952 in the input sequence, which significantly slows its training and inference speeds. In addition, it is difficult to transfer the structure of LatticeLSTM to other neural-network architectures (e.g., convolutional neural networks and transformers) that may be more suitable for some specific tasks. In this work, we propose a simpler method to realize the idea of Lattice-LSTM, i.e., incorporating all the matched words for each character to a character-based NER model. The first principle of our model design is to achieve a fast inference speed. To this end, we propose to encode lexicon information in the character representations, and we design the encoding scheme to preserve as much of the lexicon matching results as possible. Compared with Lattice-LSTM, our method avoids the need for a complicated model architecture, is easier to implement, and can be quickly adapted to any appropriate neural NER model by adjusting the character representation layer. In addition, ablation studies show the superiority of our method in incorporating more complete and distinct lexicon information, as well as introducing a more effective word-weighting strategy. The contributions of this work can be summarized as follows: • We propose a simple but effective method for incorporating word lexicons into the character representations for Chinese NER. • The proposed method is transferable to different sequence-labeling architectures and can be easily incorporated with pre-trained models like BERT (Devlin et al., 2018). We performed experiments on four public Chinese NER datasets. The experimental results show that when implementing the sequence modeling layer with a single-layer Bi-LSTM, our method achieves considerable improvements over the state-of-theart methods in both inference speed and sequence labeling performance. 2 Background In this section, we introduce several previous works that influenced our work, including the Softword technique and Lattice-LSTM. 2.1 Softword Feature The Softword technique was originally used for incorporating word segmentation information into downstream tasks (Zhao and Kit, 2008; Peng and Dredze, 2016). It augments the character representation with the embedding of its corresponding segmentation label: xc j ←[xc j; eseg(seg(cj))]. (1) Here, seg(cj) ∈Yseg denotes the segmentation label of the character cj predicted by the word segmentor, eseg denotes the segmentation label embedding lookup table, and typically Yseg = {B, M, E, S}. However, gold segmentation is not provided in most datasets, and segmentation results obtained by a segmenter can be incorrect. Therefore, segmentation errors will inevitably be introduced through this approach. 2.2 Lattice-LSTM Lattice-LSTM designs to incorporate lexicon information into the character-based neural NER model. 
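As a brief aside before the Lattice-LSTM details, the Softword augmentation of Eq. (1) in Section 2.1 can be sketched in a few lines. This is an illustrative PyTorch fragment rather than code from the paper; the label set, embedding sizes, and segmenter output below are placeholders.

```python
# A minimal sketch of the Softword augmentation in Eq. (1): concatenate the
# embedding of the predicted segmentation label to each character embedding.
import torch
import torch.nn as nn

SEG_LABELS = {"B": 0, "M": 1, "E": 2, "S": 3}  # typical Yseg label set

class SoftwordEmbedding(nn.Module):
    def __init__(self, vocab_size, char_dim=50, seg_dim=20):
        super().__init__()
        self.char_emb = nn.Embedding(vocab_size, char_dim)
        self.seg_emb = nn.Embedding(len(SEG_LABELS), seg_dim)

    def forward(self, char_ids, seg_ids):
        # char_ids, seg_ids: LongTensors of shape [batch, seq_len]
        x_char = self.char_emb(char_ids)           # [batch, seq_len, char_dim]
        x_seg = self.seg_emb(seg_ids)              # [batch, seq_len, seg_dim]
        return torch.cat([x_char, x_seg], dim=-1)  # concatenation as in Eq. (1)

# Dummy usage for a 5-character sentence; the segmentation labels would come
# from an external CWS system in practice.
layer = SoftwordEmbedding(vocab_size=100)
chars = torch.randint(0, 100, (1, 5))
segs = torch.tensor([[SEG_LABELS["B"], SEG_LABELS["E"],
                      SEG_LABELS["B"], SEG_LABELS["M"], SEG_LABELS["E"]]])
print(layer(chars, segs).shape)  # torch.Size([1, 5, 70])
```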
To achieve this purpose, lexicon matching is first performed on the input sentence. If the subsequence {ci, · · · , cj} of the sentence matches a word in the lexicon for i < j, a directed edge is added from ci to cj. All lexicon matching results related to a character are preserved by allowing the character to be connected with multiple other characters. Intrinsically, this practice converts the input form of a sentence from a chain into a graph. In a normal LSTM layer, the hidden state hi and the memory cell ci of each time step is updated by: hi, ci = f(hj−1, cj−1, xc j), (2) However, in order to model the graph-based input, Lattice-LSTM introduces an elaborate modification to the normal LSTM. Specifically, let s<∗,j> denote the list of sub-sequences of sentence s that match the lexicon and end with cj, h<∗,j> denote the corresponding hidden state list {hi, ∀s<i,j> ∈ s<∗,j>}, and c<∗,j> denote the corresponding memory cell list {ci, ∀s<i,j> ∈s<∗,j>}. In Lattice-LSTM, the hidden state hj and memory cell cj of cj are now updated as follows: hj, cj = f(hj−1, cj−1, xc j, s<∗,j>, h<∗,j>, c<∗,j>), (3) where f is a simplified representation of the function used by Lattice-LSTM to perform memory update. From our perspective, there are two main advantages to Lattice-LSTM. First, it preserves all the possible lexicon matching results that are related to 5953 a character, which helps avoid the error propagation problem introduced by heuristically choosing a single matching result for each character. Second, it introduces pre-trained word embeddings to the system, which greatly enhances its performance. However, efficiency problems exist in LatticeLSTM. Compared with normal LSTM, LatticeLSTM needs to additionally model s<∗,j>, h<∗,j>, and c<∗,j> for memory update, which slows the training and inference speeds. Additionally, due to the complicated implementation of f, it is difficult for Lattice-LSTM to process multiple sentences in parallel (in the published implementation of Lattice-LSTM, the batch size was set to 1). These problems limit its application in some industrial areas where real-time NER responses are needed. 3 Approach In this work, we sought to retain the merits of Lattice-LSTM while overcoming its drawbacks. To this end, we propose a novel method in which lexicon information is introduced by simply adjusting the character representation layer of an NER model. We refer to this method as SoftLexicon. As shown in Figure 1, the overall architecture of the proposed method is as follows. First, each character of the input sequence is mapped into a dense vector. Next, the SoftLexicon feature is constructed and added to the representation of each character. Then, these augmented character representations are put into the sequence modeling layer and the CRF layer to obtain the final predictions. 3.1 Character Representation Layer For a character-based Chinese NER model, the input sentence is seen as a character sequence s = {c1, c2, · · · , cn} ∈Vc, where Vc is the character vocabulary. Each character ci is represented using a dense vector (embedding): xc i = ec(ci), (4) where ec denotes the character embedding lookup table. Char + bichar. In addition, Zhang and Yang, (2018) has proved that character bigrams are useful for representing characters, especially for those methods not using word information. 
Therefore, it is common to augment the character representations with bigram embeddings: xc i = [ec(ci); eb(ci, ci+1)], (5) Match in the lexicon 中 国 语 ⾔ 学 语⾔ (Language) 语⾔学 (Linguistic) 国语 (National language) 中国语⾔ (Chinese language) 中国语 (Chinese language) 语 (Language; Say) C3 语⾔f3,4 语⾔学f3,5 中国语⾔f1,4 国语 f2,3 中国语f1,3 语f3,3 weight B M E S ⨁ ⨁ ⨁ ⨁ C4 B M E S ⨁ ⨁ ⨁ ⨁ C5 B M E S ⨁ ⨁ ⨁ ⨁ C2 B M E S ⨁ ⨁ ⨁ ⨁ C1 B M E S ⨁ ⨁ ⨁ ⨁ Bi-LSTM / CNN / Transformer layer B-LOC E-LOC O O O Char emb SoftLexicon feature Concatenation Sequence encoding layer CRF layer Predictions Input sequence … … … Figure 1: The overall architecture of the proposed method. where eb denotes the bigram embedding lookup table. 3.2 Incorporating Lexicon Information The problem with the purely character-based NER model is that it fails to exploit word information. To address this issue, we proposed two methods, as described below, to introduce the word information into the character representations. In the following, for any input sequence s = {c1, c2, · · · , cn}, wi,j denotes its sub-sequence {ci, ci+1, · · · , cj}. ExSoftword Feature The first conducted method is an intuitive extension of the Softword method, called ExSoftword. Instead of choosing one segmentation result for each character, it proposes to retain all possible segmentation results obtained using the lexicon: xc j ←[xc j; eseg(segs(cj)], (6) where segs(cj) denotes all segmentation labels related to cj, and eseg(segs(cj)) is a 5-dimensional multi-hot vector with each dimension corresponding to an item of {B, M, E, S, O}. As an example presented in Figure 2, the character c7 (“西”) occurs in two words, w5,8 (“中山西路”) and w6,7 (“山西”), that match the lexicon, and it occurs in the middle of “中山 西路” and the end of “山西”. Therefore, its corresponding segmentation result is {M, E}, and its character representation is enriched as follows: xc 7 ←[xc 7; eseg({M, E})]. (7) 5954 中 Zhong 山 Hill 西 East 路 Road 在 On 住 Live 明 Ming 李 Li 中山 Zhongshan City 中山西路 West Zhongshan Road 李明 Ming Li (person name) c𝟏 𝒄2 𝒄8 𝒄7 𝒄6 𝒄5 𝒄4 𝒄3 𝒘𝟔,𝟕 𝒘𝟓,𝟔 𝒘𝟓,𝟖 𝒘𝟏,𝟐 { 𝐁} { 𝐁, 𝐌, 𝐄} { 𝐌, 𝐄} { 𝐄} 中山西 East Zhongshan City 山西路 Shanxi Road 中山 Zhongshan City 山西 Shanxi Province 𝒘𝟓,𝟔 𝒘𝟓,𝟕 𝒘𝟔,𝟖 Cannot restore! ExSoftword method Figure 2: The ExSoftword method. Here, the second and third dimensions of eseg(·) are set to 1, and the rest dimensions are set to 0. The problem of this approach is that it cannot fully inherit the two merits of Lattice-LSTM. First, it fails to introduce pre-trained word embeddings. Second, it still losses information of the matching results. As shown in Figure 2, the constructed ExSoftword feature for characters {c5, c6, c7, c8} is {{B}, {B, M, E}, {M, E}, {E}}. However, given this constructed sequence, there exists more than one corresponding matching results, such as {w5,6 (“中山”), w5,7 (“中山西”), w6,8 (“山西路”)} and {w5,6 (“中山”), w6,7 (“山西”), w5,8 (“中山西 路”)}. Therefore, we cannot tell which is the correct result to be restored. SoftLexicon Based on the analysis on Exsoftword, we further developed the SoftLexicon method to incorporate the lexicon information. The SoftLexicon features are constructed in three steps. Categorizing the matched words. First, to retain the segmentation information, all matched words of each character ci is categorized into four word sets “BMES”, which is marked by the four segmentation labels. 
For each character ci in the input sequence = {c1, c2, · · · , cn}, the four set is constructed by: B(ci) = {wi,k, ∀wi,k ∈L, i < k ≤n}, M(ci) = {wj,k, ∀wj,k ∈L, 1 ≤j < i < k ≤n}, E(ci) = {wj,i, ∀wj,i ∈L, 1 ≤j < i}, S(ci) = {ci, ∃ci ∈L}. (8) Here, L denotes the lexicon we use in this work. Additionally, if a word set is empty, a special word “NONE” is added to the empty word set. An example of this categorization approach is shown in Figure 3. Noted that in this way, not only we 李 Li 𝐁= "山西" 𝐌= {"中山西路"} 𝐄= "中山" 𝐒= "山" 𝐁= "𝑵𝒐𝒏𝒆" 𝐌= "𝑵𝒐𝒏𝒆" 𝐄= "中山西路" 𝐒= "路" Soft-lexicon method 中 Zhong 山 Hill 西 East 路 Road 在 On 住 Live 明 Ming 中山 Zhongshan City 中山西路 West Zhongshan Road 李明 Ming Li (person name) c𝟏 𝒄2 𝒄8 𝒄7 𝒄6 𝒄5 𝒄4 𝒄3 𝒘𝟔,𝟕 𝒘𝟓,𝟔 𝒘𝟓,𝟖 𝒘𝟏,𝟐 山西 Shanxi Province Figure 3: The SoftLexicon method. can introduce the word embedding, but also no information loss exists since the matching results can be exactly restored from the four word sets of the characters. Condensing the word sets. After obtaining the “BMES” word sets for each character, each word set is then condensed into a fixed-dimensional vector. In this work, we explored two approaches for implementing this condensation. The first implementation is the intuitive meanpooling method: vs(S) = 1 |S| X w∈S ew(w). (9) Here, S denotes a word set and ew denotes the word embedding lookup table. However, as shown in Table 8, the results of empirical studies revealed that this algorithm does not perform well. Therefore, a weighting algorithm is introduced to further leverage the word information. To maintain computational efficiency, we did not opt for a dynamic weighting algorithm like attention. Instead, we propose using the frequency of each word as an indication of its weight. Since the frequency of a word is a static value that can be obtained offline, this can greatly accelerate the calculation of the weight of each word. Specifically, let z(w) denote the frequency that a lexicon word w occurs in the statistical data, the weighted representation of the word set S is obtained as follows: vs(S) = 4 Z X w∈S z(w)ew(w), (10) where Z = X w∈B∪M∪E∪S z(w). 5955 Here, weight normalization is performed on all words in the four word sets to make an overall comparison. In this work, the statistical data set is constructed from a combination of training and developing data of the task. Of course, if there is unlabelled data in the task, the unlabeled data set can serve as the statistical data set. In addition, note that the frequency of w does not increase if w is covered by another sub-sequence that matches the lexicon. This prevents the problem in which the frequency of a shorter word is always less than the frequency of the longer word that covers it. Combining with character representation. The final step is to combine the representations of four word sets into one fix-dimensional feature, and add it to the representation of each character. In order to retain as much information as possible, we choose to concatenate the representations of the four word sets, and the final representation of each character is obtained by: es (B, M, E, S) = [vs(B); vs(M); vs(E); vs(S)], xc ←[xc; es(B, M, E, S)]. (11) Here, vs denotes the weighting function above. 3.3 Sequence Modeling Layer With the lexicon information incorporated, the character representations are then put into the sequence modeling layer, which models the dependency between characters. 
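Before turning to the sequence encoder, the SoftLexicon construction just described (Eqs. 8-11) can be made concrete with a small sketch. This is an illustrative fragment rather than the authors' implementation: the toy lexicon, word embeddings, and frequency table are placeholders, and empty word sets are simply pooled to zero instead of being mapped to the special "NONE" word.

```python
# A hedged sketch of the SoftLexicon feature: build the B/M/E/S word sets for
# each character (Eq. 8), pool each set with frequency weights (Eq. 10), and
# concatenate the four pooled vectors (Eq. 11).
import numpy as np

def bmes_sets(sentence, lexicon):
    """Return, for every character position, its four matched-word sets."""
    n = len(sentence)
    sets = [{"B": set(), "M": set(), "E": set(), "S": set()} for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            w = sentence[i:j + 1]
            if w not in lexicon:
                continue
            if i == j:
                sets[i]["S"].add(w)
            else:
                sets[i]["B"].add(w)
                sets[j]["E"].add(w)
                for k in range(i + 1, j):
                    sets[k]["M"].add(w)
    return sets

def soft_lexicon_feature(sets_at_c, word_vec, freq, dim):
    """Frequency-weighted pooling (Eq. 10) followed by concatenation (Eq. 11)."""
    # Z normalizes over all words in the four sets of this character.
    z_total = sum(freq.get(w, 1) for s in sets_at_c.values() for w in s) or 1
    pooled = []
    for label in ("B", "M", "E", "S"):
        v = np.zeros(dim)
        for w in sets_at_c[label]:   # an empty set yields a zero vector here
            v += freq.get(w, 1) * word_vec[w]
        pooled.append(4.0 / z_total * v)
    return np.concatenate(pooled)    # appended to the character embedding

# Toy usage: a 3-character "sentence" over a toy lexicon with 4-d embeddings.
lexicon = {"ab", "abc", "b", "bc"}
word_vec = {w: np.random.randn(4) for w in lexicon}
freq = {"ab": 5, "abc": 2, "b": 7, "bc": 3}
sets = bmes_sets("abc", lexicon)
feat = soft_lexicon_feature(sets[1], word_vec, freq, dim=4)  # features for 'b'
print(feat.shape)  # (16,) = 4 sets x 4-d word vectors
```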
Generic architectures for this layer including the bidirectional longshort term memory network(BiLSTM), the Convolutional Neural Network(CNN) and the transformer(Vaswani et al., 2017). In this work, we implemented this layer with a single-layer BiLSTM. Here, we precisely show the definition of the forward LSTM:   it ft ot ect  =   σ σ σ tanh    W  xc t ht−1  + b  , ct = ect ⊙it + ct−1 ⊙ft, ht = ot ⊙tanh(ct). (12) where σ is the element-wise sigmoid function and ⊙represents element-wise product. W and b are trainable parameters. The backward LSTM shares the same definition as the forward LSTM Datasets Type Train Dev Test OntoNotes Sentence 15.7k 4.3k 4.3k Char 491.9k 200.5k 208.1k MSRA Sentence 46.4k 4.4k Char 2169.9k 172.6k Weibo Sentence 1.4k 0.27k 0.27k Char 73.8k 14.5 14.8k Resume Sentence 3.8k 0.46 0.48k Char 124.1k 13.9k 15.1k Table 1: Statistics of datasets. yet model the sequence in a reverse order. The concatenated hidden states at the ith step of the forward and backward LSTMs hi = [−→h i; ←−h i] forms the context-dependent representation of ci. 3.4 Label Inference Layer On top of the sequence modeling layer, it is typical to apply a sequential conditional random field (CRF) (Lafferty et al., 2001) layer to perform label inference for the whole character sequence at once: p(y|s; θ) = Qn t=1 φt(yt−1, yt|s) P y′∈Ys Qn t=1 φt(y′ t−1, y′ t|s). (13) Here, Ys denotes all possible label sequences of s, and φt(y′, y|s) = exp(wT y′,yht + by′,y), where wy′,y and by′,y are trainable parameters corresponding to the label pair (y′, y), and θ denotes model parameters. For label inference, it searches for the label sequence y∗with the highest conditional probability given the input sequence s: y∗=y p(y|s; θ), (14) which can be efficiently solved using the Viterbi algorithm (Forney, 1973). 4 Experiments 4.1 Experiment Setup Most experimental settings in this work followed the protocols of Lattice-LSTM (Zhang and Yang, 2018), including tested datasets, compared baselines, evaluation metrics (P, R, F1), and so on. To make this work self-completed, we concisely illustrate some primary settings of this work. Datasets The methods were evaluated on four Chinese NER datasets, including OntoNotes (Weischedel et al., 2011), MSRA (Levow, 2006), Weibo NER (Peng 5956 Models OntoNotes MSRA Weibo Resume Lattice-LSTM 1× 1× 1× 1× LR-CNN (Gui et al., 2019) 2.23× 1.57× 2.41× 1.44× BERT-tagger 2.56× 2.55× 4.45× 3.12× BERT + LSTM + CRF 2.77× 2.32× 2.84× 2.38× SoftLexicon (LSTM) 6.15× 5.78× 6.10× 6.13× SoftLexicon (LSTM) + bichar 6.08× 5.95× 5.91× 6.45× SoftLexicon (LSTM) + BERT 2.74× 2.33× 2.85× 2.32× Table 2: Inference speed (average sentences per second, the larger the better) of our method with LSTM layer compared with Lattice-LSTM, LR-CNN and BERT. and Dredze, 2015; He and Sun, 2017a), and Resume NER (Zhang and Yang, 2018). OntoNotes and MSRA are from the newswire domain, where gold-standard segmentation is available for training data. For OntoNotes, gold segmentation is also available for development and testing data. Weibo NER and Resume NER are from social media and resume, respectively. There is no gold standard segmentation in these two datasets. Table 1 shows statistic information of these datasets. As for the lexicon, we used the same one as Lattice-LSTM, which contains 5.7k single-character words, 291.5k two-character words, 278.1k three-character words, and 129.1k other words. 
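As a side note on the weighting statistic of Section 3.2, the sketch below shows one way the frequency z(w) in Eq. (10) could be accumulated over the statistical data (training plus development sets), respecting the rule that a matched word is not counted when it is covered by another lexicon match in the same sentence. The corpus and lexicon here are toy placeholders, not the resources used in the paper.

```python
# A hedged sketch of computing the word-frequency statistic z(w) used for the
# frequency-based weighting in Eq. (10).
from collections import Counter

def lexicon_matches(sentence, lexicon):
    """All (start, end) spans (inclusive) of the sentence matching the lexicon."""
    n = len(sentence)
    return [(i, j) for i in range(n) for j in range(i, n)
            if sentence[i:j + 1] in lexicon]

def word_frequencies(corpus, lexicon):
    freq = Counter()
    for sent in corpus:
        spans = lexicon_matches(sent, lexicon)
        for (i, j) in spans:
            # Skip a match that is covered by another matching sub-sequence.
            covered = any(a <= i and j <= b and (a, b) != (i, j)
                          for (a, b) in spans)
            if not covered:
                freq[sent[i:j + 1]] += 1
    return freq

# Toy usage: "ab" is covered by "abc" in the first sentence, so only its
# occurrence in the second sentence is counted.
lexicon = {"ab", "abc"}
print(word_frequencies(["xabcx", "abz"], lexicon))  # Counter({'abc': 1, 'ab': 1})
```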
In addition, the pre-trained character embeddings we used are also the same as those of Lattice-LSTM, which were pre-trained on Chinese Giga-Word using word2vec. Implementation Detail In this work, we implement the sequence-labeling layer with a Bi-LSTM. Most implementation details followed those of Lattice-LSTM, including character and word embedding sizes, dropout, embedding initialization, and the number of LSTM layers. Additionally, the hidden size was set to 200 for the small datasets Weibo and Resume, and to 300 for the larger datasets OntoNotes and MSRA. The initial learning rate was set to 0.005 for Weibo and 0.0015 for the other three datasets, using the Adamax (Kingma and Ba, 2014) step rule. (Please refer to the attached source code for more implementation details of this work, and access https://github.com/jiesutd/LatticeLSTM for the pre-trained word and character embeddings.) 4.2 Computational Efficiency Study Table 2 shows the inference speed of the SoftLexicon method when the sequence modeling layer is implemented with a single Bi-LSTM layer. The speed was evaluated as the average number of sentences processed by the model per second on a GPU (NVIDIA TITAN X). Figure 4: Inference speed (sentences/s) against sentence length for SoftLexicon, Lattice-LSTM, and LR-CNN; the same batch size of 1 is used for a fair speed comparison. From the table, we can observe that when decoding with the same batch size (=1), the proposed method is considerably more efficient than Lattice-LSTM and LR-CNN, performing up to 6.15 times faster than Lattice-LSTM. The inference speed of SoftLexicon (LSTM) with bichar is close to that without bichar, since we only concatenate an additional feature to the character representation. The inference speeds of the BERT-Tagger and SoftLexicon (LSTM) + BERT models are limited by the deep layers of the BERT architecture. However, the SoftLexicon (LSTM) + BERT model is still faster than Lattice-LSTM and LR-CNN on all datasets. To further illustrate the efficiency of the SoftLexicon method, we also conducted an experiment to evaluate its inference speed on sentences of different lengths, as shown in Figure 4. For a fair comparison, we set the batch size to 1 in all of the compared methods. The results show that the proposed method achieves a significant improvement in speed over Lattice-LSTM and LR-CNN when processing short sentences. As sentence length increases, the proposed method remains consistently faster than Lattice-LSTM and LR-CNN, despite the speed degradation caused by the recurrent architecture of the LSTM. Overall, the proposed SoftLexicon method shows a clear advantage over the other methods in computational efficiency. 4.3 Effectiveness Study Tables 3−6 show the performance of our method against the compared baselines. (In Tables 3−5, ∗ indicates that the model uses external labeled data for semi-supervised learning, and † means that the model also uses discrete features.) In this study, the sequence modeling layer of our method was
5957 Input Models P R F1 Gold seg Yang et al., 2016 65.59 71.84 68.57 Yang et al., 2016∗† 72.98 80.15 76.40 Che et al., 2013∗ 77.71 72.51 75.02 Wang et al., 2013∗ 76.43 72.32 74.32 Word-based (LSTM) 76.66 63.60 69.52 + char + bichar 78.62 73.13 75.77 Auto seg Word-based (LSTM) 72.84 59.72 65.63 + char + bichar 73.36 70.12 71.70 No seg Char-based (LSTM) 68.79 60.35 64.30 + bichar + softword 74.36 69.43 71.89 + ExSoftword 69.90 66.46 68.13 + bichar + ExSoftword 73.80 71.05 72.40 Lattice-LSTM 76.35 71.56 73.88 LR-CNN (Gui et al., 2019) 76.40 72.60 74.45 SoftLexicon (LSTM) 77.28 74.07 75.64 SoftLexicon (LSTM) + bichar 77.13 75.22 76.16 BERT-Tagger 76.01 79.96 77.93 BERT + LSTM + CRF 81.99 81.65 81.82 SoftLexicon (LSTM) + BERT 83.41 82.21 82.81 Table 3: Performance on OntoNotes. A model followed by (LSTM) (e.g., Proposed (LSTM)) indicates that its sequence modeling layer is LSTM-based. implemented with a single layer bidirectional LSTM. OntoNotes. Table 3 shows results 4 on the OntoNotes dataset, where gold word segmentation is provided for both training and testing data. The methods of the “Gold seg” and the “Auto seg” groups are all word-based, with the former input building on gold word segmentation results and the latter building on automatic word segmentation results by a segmenter trained on OntoNotes training data. The methods used in the “No seg” group are character-based. From the table, we can make several observations. First, when gold word segmentation was replaced by automatically generated word segmentation, the F1 score decreases from 75.77% to 71.70%. This reveals the problem of treating the predicted word segmentation result as the true result in the word-based Chinese NER. Second, the F1 score of the Char-based (LSTM)+ExSoftword model is greatly improved from that of the Char-based (LSTM) model. This indicates the feasibility of the naive ExSoftword method. However, it still greatly underperforms relative to Lattice-LSTM, which reveals its deficiency in utilizing word information. Lastly, the proposed SoftLexicon method outperforms Lattice-LSTM by 1.76% with respect to the F1 score, and obtains a greater improvement of 2.28% combining the bichar 4A result in boldface indicates that it is statistically significantly better (p < 0.01 in pairwise t−test) than the others in the same box. Models P R F1 Chen et al., 2006 91.22 81.71 86.20 Zhang et al. 2006∗ 92.20 90.18 91.18 Zhou et al. 2013 91.86 88.75 90.28 Lu et al. 2016 87.94 Dong et al. 2016 91.28 90.62 90.95 Char-based (LSTM) 90.74 86.96 88.81 + bichar+softword 92.97 90.80 91.87 + ExSoftword 90.77 87.23 88.97 + bichar+ExSoftword 93.21 91.57 92.38 Lattice-LSTM 93.57 92.79 93.18 LR-CNN (Gui et al., 2019) 94.50 92.93 93.71 SoftLexicon (LSTM) 94.63 92.70 93.66 SoftLexicon (LSTM) + bichar 94.73 93.40 94.06 BERT-Tagger 93.40 94.12 93.76 BERT + LSTM + CRF 95.06 94.61 94.83 SoftLexicon (LSTM) + BERT 95.75 95.10 95.42 Table 4: Performance on MSRA. 
Models NE NM Overall Peng and Dredze, 2015 51.96 61.05 56.05 Peng and Dredze, 2016∗ 55.28 62.97 58.99 He and Sun, 2017a 50.60 59.32 54.82 He and Sun, 2017b∗ 54.50 62.17 58.23 Char-based (LSTM) 46.11 55.29 52.77 + bichar+softword 50.55 60.11 56.75 + ExSoftword 44.65 55.19 52.42 + bichar+ExSoftword 58.93 53.38 56.02 Lattice-LSTM 53.04 62.25 58.79 LR-CNN (Gui et al., 2019) 57.14 66.67 59.92 SoftLexicon (LSTM) 59.08 62.22 61.42 SoftLexicon (LSTM) + bichar 58.12 64.20 59.81 BERT-Tagger 65.77 62.05 63.80 BERT + LSTM + CRF 69.65 64.62 67.33 SoftLexicon (LSTM) + BERT 70.94 67.02 70.50 Table 5: Performance on Weibo. NE, NM and Overall denote F1 scores for named entities, nominal entities (excluding named entities) and both, respectively. feature. It even performs comparably with the word-based methods of the “Gold seg” group, verifying its effectiveness on OntoNotes. MSRA/Weibo/Resume. Tables 4, 5 and 6 show results on the MSRA, Weibo and Resume datasets, respectively. Compared methods include the best statistical models on these data set, which leveraged rich handcrafted features (Chen et al., 2006; Zhang et al., 2006; Zhou et al., 2013), character embedding features (Lu et al., 2016; Peng and Dredze, 2016), radical features (Dong et al., 2016), cross-domain data, and semi-supervised data (He and Sun, 2017b). From the tables, we can see that the performance of the proposed Softlexion method is significant better than that of Lattice-LSTM and other baseline methods on all three datasets. 5958 Models P R F1 Word-based (LSTM) 93.72 93.44 93.58 +char+bichar 94.07 94.42 94.24 Char-based (LSTM) 93.66 93.31 93.48 + bichar+softword 94.53 94.29 94.41 + ExSoftword 95.29 94.42 94.85 + bichar+ExSoftword 96.14 94.72 95.43 Lattice-LSTM 94.81 94.11 94.46 LR-CNN (Gui et al., 2019) 95.37 94.84 95.11 SoftLexicon (LSTM) 95.30 95.77 95.53 SoftLexicon (LSTM) + bichar 95.71 95.77 95.74 BERT-Tagger 94.87 96.50 95.68 BERT + LSTM + CRF 95.75 95.28 95.51 SoftLexicon (LSTM) + BERT 96.08 96.13 96.11 Table 6: Performance on Resume. Models OntoNotes MSRA Weibo Resume SoftLexicon (LSTM) 75.64 93.66 61.42 95.53 ExSoftword (CNN) 68.11 90.02 53.93 94.49 SoftLexicon (CNN) 74.08 92.19 59.65 95.02 ExSoftword (Transformer) 64.29 86.29 52.86 93.78 SoftLexicon (Transformer) 71.21 90.48 61.04 94.59 Table 7: F1 score with different implementations of the sequence modeling layer. ExSoftword is the shorthand of Char-based+bichar+ExSoftword. 4.4 Transferability Study Table 7 shows the performance of the SoftLexicon method when implementing the sequence modeling layer with different neural architecture. From the table, we can first see that the LSTM-based architecture performed better than the CNN- and transformer- based architectures. In addition, our method with different sequence modeling layers consistently outperformed their corresponding ExSoftword baselines. This confirms the superiority of our method in modeling lexicon information in different neural NER models. 4.5 Combining Pre-trained Model We also conducted experiments on the four datasets to further verify the effectiveness of SoftLexicon in combination with pre-trained model, the results of which are shown in Tables 3−6. In these experiments, we first use a BERT encoder to obtain the contextual representations of each sequenc, and then concatenated them into the character representations. From the table, we can see that the SoftLexicon method with BERT outperforms the BERT tagger on all four datasets. 
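To make the combination just described concrete, here is a hedged sketch of the concatenation step. The dimensions are illustrative, the BERT hidden states are assumed to be pre-computed and aligned to the character sequence, and this is not the authors' code.

```python
# A minimal sketch: BERT's contextual vector for each character is simply
# concatenated with the character embedding and the SoftLexicon feature
# before the Bi-LSTM + CRF layers.
import torch

def combine_representations(char_emb, softlexicon_feat, bert_hidden):
    # char_emb:          [batch, seq_len, d_char]
    # softlexicon_feat:  [batch, seq_len, 4 * d_word]
    # bert_hidden:       [batch, seq_len, d_bert]
    return torch.cat([char_emb, softlexicon_feat, bert_hidden], dim=-1)

# Dummy usage with placeholder dimensions.
x = combine_representations(torch.randn(2, 7, 50),
                            torch.randn(2, 7, 200),
                            torch.randn(2, 7, 768))
print(x.shape)  # torch.Size([2, 7, 1018])
```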
These results show that the SoftLexicon method can be effectively combined with pre-trained model. Moreover, the results also verify the effectiveness of our method in utilizing lexicon information, Models OntoNotes MSRA Weibo Resume SoftLexicon (LSTM) 75.64 93.66 61.42 95.53 - “M” group 75.06 93.09 58.13 94.72 - Distinction 70.29 92.08 54.85 94.30 - Weighted pooling 72.57 92.76 57.72 95.33 - Overall weighting 74.28 93.16 59.55 94.92 Table 8: An ablation study of the proposed model. which means it can complement the information obtained from the pre-trained model. 4.6 Ablation Study To investigate the contribution of each component of our method, we conducted ablation experiments on all four datasets, as shown in table 8. (1) In Lattice-LSTM, each character receives word information only from the words that begin or end with it. Thus, the information of the words that contain the character inside is ignored. However, the SoftLexicon prevents the loss of this information by incorporating the “Middle” group of words. In the “ - ‘M’ group” experiment, we removed the ”Middle” group in SoftLexicon, as in Lattice-LSTM. The degradation in performance on all four datasets indicates the importance of the “M” group of words, and confirms the advantage of our method. (2) Our method proposed to draw a clear distinction between the four “BMES” categories of matched words. To study the relative contribution of this design, we conducted experiments to remove this distinction, i.e., we simply added up all the weighted words regardless of their categories. The decline in performance verifies the significance of a clear distinction for different matched words. (3) We proposed two strategies for pooling the four word sets in Section 3.2. In the “- Weighted pooling” experiment, the weighted pooling strategy was replaced with mean-pooling, which degrades the performance. Compared with mean-pooling, the weighting strategy not only succeeds in weighing different words by their significance, but also introduces the frequency information of each word in the statistical data, which is verified to be helpful. (4) Although existing lexicon-based methods like Lattice-LSTM also use word weighting, unlike the proposed Soft-lexion method, they fail to perform weight normalization among all the matched words. For example, Lattice-LSTM only normalizes the weights inside the “B” group or the ”E” group. In the “- Overall weighting” experiment, we performed weight normalization inside each 5959 “BMES” group as Lattice-LSTM does, and found the resulting performance to be degraded. This result shows that the ability to perform overall weight normalization among all matched words is also an advantage of our method. 5 Conclusion In this work, we addressed the computational efficiency of utilizing word lexicons in Chinese NER. To obtain a high-performing Chinese NER system with a fast inference speed, we proposed a novel method to incorporate the lexicon information into the character representations. Experimental studies on four benchmark Chinese NER datasets reveal that our method can achieve a much faster inference speed and better performance than the compared state-of-the-art methods. Acknowledgements The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by China National Key RD Program (No. 2018YFB1005104, 2018YFC0831105, 2017YFB1002104), National Natural Science Foundation of China (No. 
61976056, 61532011, 61751201), Shanghai Municipal Science and Technology Major Project (No.2018SHZDZX01), Science and Technology Commission of Shanghai Municipality Grant (No.18DZ1201000, 16JC1420401, 17JC1420200). References Wanxiang Che, Mengqiu Wang, Christopher D Manning, and Ting Liu. 2013. Named entity recognition with bilingual constraints. In NAACL, pages 52–62. Aitao Chen, Fuchun Peng, Roy Shan, and Gordon Sun. 2006. Chinese named entity recognition with conditional probabilistic models. In SIGHAN Workshop on Chinese Language Processing. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In ACL—IJCNLP, volume 1, pages 167–176. Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association of Computational Linguistics, 4(1):357–370. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Dennis Diefenbach, Vanessa Lopez, Kamal Singh, and Pierre Maret. 2018. Core techniques of question answering systems over knowledge bases: a survey. KAIS, 55(3):529–569. Ruixue Ding, Pengjun Xie, Xiaoyan Zhang, Wei Lu, Linlin Li, and Luo Si. 2019. A neural multidigraph model for chinese ner with gazetteers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1462–1467. Chuanhai Dong, Jiajun Zhang, Chengqing Zong, Masanori Hattori, and Hui Di. 2016. Characterbased lstm-crf with radical-level features for chinese named entity recognition. In Natural Language Understanding and Intelligent Applications, pages 239–250. Springer. G David Forney. 1973. The viterbi algorithm. Proceedings of the IEEE, 61(3):268–278. Tao Gui, Ruotian Ma, Qi Zhang, Lujun Zhao, Yu-Gang Jiang, and Xuanjing Huang. 2019a. Cnn-based chinese ner with lexicon rethinking. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 4982–4988. AAAI Press. Tao Gui, Yicheng Zou, Qi Zhang, Minlong Peng, Jinlan Fu, Zhongyu Wei, and Xuan-Jing Huang. 2019b. A lexicon-based graph neural network for chinese ner. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1039–1049. Hangfeng He and Xu Sun. 2017a. F-score driven max margin neural network for named entity recognition in chinese social media. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 713–718. Hangfeng He and Xu Sun. 2017b. A unified model for cross-domain and semi-supervised named entity recognition in chinese social media. In AAAI. Jingzhou He and Houfeng Wang. 2008. Chinese named entity recognition and word segmentation based on character. In Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 5960 Gina-Anne Levow. 2006. 
The third international chinese language processing bakeoff: Word segmentation and named entity recognition. In SIGHAN Workshop on Chinese Language Processing, pages 108–117. Haibo Li, Masato Hagiwara, Qi Li, and Heng Ji. 2014. Comparison of the impact of word segmentation on name tagging for chinese and japanese. In LREC, pages 2532–2536. Liyuan Liu, Jingbo Shang, Xiang Ren, Frank Xu, Huan Gui, Jian Peng, and Jiawei Han. 2018. Empower sequence labeling with task-aware neural language model. AAAI Conference on Artificial Intelligence. Wei Liu, Tongge Xu, Qinghua Xu, Jiayu Song, and Yueran Zu. 2019. An encoding strategy based wordcharacter lstm for chinese ner. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2379–2389. Zhangxun Liu, Conghui Zhu, and Tiejun Zhao. 2010. Chinese named entity recognition with a sequence labeling approach: based on characters, or based on words? In Advanced intelligent computing theories and applications. With aspects of artificial intelligence, pages 634–640. Springer. Yanan Lu, Yue Zhang, and Dong-Hong Ji. 2016. Multiprototype chinese character embedding. In LREC. Nanyun Peng and Mark Dredze. 2015. Named entity recognition for chinese social media with jointly trained embeddings. In EMNLP. Nanyun Peng and Mark Dredze. 2016. Improving named entity recognition for chinese social media with word segmentation representation learning. In ACL, page 149. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74–84. Dianbo Sui, Yubo Chen, Kang Liu, Jun Zhao, and Shengping Liu. 2019. Leverage lexical knowledge for chinese named entity recognition via collaborative graph network. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3821–3831. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Mengqiu Wang, Wanxiang Che, and Christopher D Manning. 2013. Effective bilingual constraints for semi-supervised learning of named entity recognizers. In AAAI. Ralph Weischedel, Sameer Pradhan, Lance Ramshaw, Martha Palmer, Nianwen Xue, Mitchell Marcus, Ann Taylor, Craig Greenberg, Eduard Hovy, Robert Belvin, et al. 2011. Ontonotes release 4.0. LDC2011T03, Philadelphia, Penn.: Linguistic Data Consortium. Jie Yang, Zhiyang Teng, Meishan Zhang, and Yue Zhang. 2016. Combining discrete and neural features for sequence labeling. In CICLing. Springer. Suxiang Zhang, Ying Qin, Juan Wen, and Xiaojie Wang. 2006. Word segmentation and named entity recognition for sighan bakeoff3. In SIGHAN Workshop on Chinese Language Processing, pages 158–161. Yue Zhang and Jie Yang. 2018. Chinese ner using lattice lstm. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), 1554-1564. Hai Zhao and Chunyu Kit. 2008. Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition. 
In Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing. Junsheng Zhou, Weiguang Qu, and Fen Zhang. 2013. Chinese named entity recognition via joint identification and categorization. Chinese journal of electronics, 22(2):225–230.
2020
528
AdvAug: Robust Adversarial Augmentation for Neural Machine Translation Yong Cheng, Lu Jiang, Wolfgang Macherey and Jacob Eisenstein Google Research {chengyong, lujiang, wmach, jeisenstein}@google.com Abstract In this paper, we propose a new adversarial augmentation method for Neural Machine Translation (NMT). The main idea is to minimize the vicinal risk over virtual sentences sampled from two vicinity distributions, of which the crucial one is a novel vicinity distribution for adversarial sentences that describes a smooth interpolated embedding space centered around observed training sentence pairs. We then discuss our approach, AdvAug, to train NMT models using the embeddings of virtual sentences in sequence-tosequence learning. Experiments on ChineseEnglish, English-French, and English-German translation benchmarks show that AdvAug achieves significant improvements over the Transformer (up to 4.9 BLEU points), and substantially outperforms other data augmentation techniques (e.g. back-translation) without using extra corpora. 1 Introduction Recent work in neural machine translation (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) has led to dramatic improvements in both research and commercial systems (Wu et al., 2016). However, a key weakness of contemporary systems is that performance can drop dramatically when they are exposed to input perturbations (Belinkov and Bisk, 2018; Cheng et al., 2019), even when these perturbations are not strong enough to alter the meaning of the input sentence. Consider a Chinese sentence, “zhejia feiji meiyou zhuangshang zhujia huo yiyuan, shizai shi qiji”. If we change the word “huo (或)” to its synonym“ji (及)”, the Transformer model will generate contradictory results of “It was indeed a miracle that the plane did not touch down at home or hospital.” versus “It was a miracle that the plane landed at home and hospital.” Such perturbations can readily be found in many public benchmarks and real-world applications. This lack of stability not only lowers translation quality but also inhibits applications in more sensitive scenarios. At the root of this problem are two interrelated issues: first, machine translation training sets are insufficiently diverse, and second, NMT architectures are powerful enough to overfit — and, in extreme cases, memorize — the observed training examples, without learning to generalize to unseen perturbed examples. One potential solution is data augmentation which introduces noise to make the NMT model training more robust. In general, two types of noise can be distinguished: (1) continuous noise which is modeled as a realvalued vector applied to word embeddings (Miyato et al., 2016, 2017; Cheng et al., 2018; Sato et al., 2019), and (2) discrete noise which adds, deletes, and/or replaces characters or words in the observed sentences (Belinkov and Bisk, 2018; Sperber et al., 2017; Ebrahimi et al., 2018; Michel et al., 2019; Cheng et al., 2019; Karpukhin et al., 2019). In both cases, the challenge is to ensure that the noisy examples are still semantically valid translation pairs. In the case of continuous noise, it only ensures that the noise vector lies within an L2-norm ball but does not guarantee to maintain semantics. While constructing semantics-preserving continuous noise in a high-dimensional space proves to be non-trivial, state-of-the-art NMT models are currently based on adversarial examples of discrete noise. For instance, Cheng et al. 
(2019) generate adversarial sentences using discrete word replacements in both the source and target, guided by the NMT loss. This approach achieves significant improvements over the Transformer on several standard NMT benchmarks. Despite this promising result, we find that the generated adversarial sentences are unnatural, and, as we will show, suboptimal for learning robust NMT models. In this paper, we propose AdvAug, a new adversarial augmentation technique for sequence-tosequence learning. We introduce a novel vicinity distribution to describe the space of adversarial examples centered around each training example. Unlike prior work (Cheng et al., 2019), we first generate adversarial sentences in the discrete data space and then sample virtual adversarial sentences from the vicinity distribution according to their interpolated embeddings. Our intuition is that the introduced vicinity distribution may increase the sample diversity for adversarial sentences. Our idea is partially inspired by mixup (Zhang et al., 2018), a technique for data augmentation in computer vision, and we also use a similar vicinity distribution as in mixup to augment the authentic training data. Our AdvAug approach finally trains on the embeddings sampled from the above two vicinity distributions. As a result, we augment the training using virtual sentences in the feature space as opposed to in the data space. The novelty of our paper is the new vicinity distribution for adversarial examples and the augmentation algorithm for sequence-to-sequence learning. Extensive experimental results on three translation benchmarks (NIST Chinese-English, IWSLT English-French, and WMT English-German) show that our approach achieves significant improvements of up to 4.9 BLEU points over the Transformer (Vaswani et al., 2017), outperforming the former state-of-the-art in adversarial learning (Cheng et al., 2019) by up to 3.3 BLEU points. When compared with widely-used data augmentation methods (Sennrich et al., 2016a; Edunov et al., 2018), we find that our approach yields better performance even without using extra corpora. We conduct ablation studies to gain further insights into which parts of our approach matter most. In summary, our contributions are as follows: 1. We propose to sample adversarial examples from a new vicinity distribution and utilize their embeddings, instead of their data points, to augment the model training. 2. We design an effective augmentation algorithm for learning sequence-to-sequence NMT models via mini-batches. 3. Our approach achieves significant improvements over the Transformer and prior stateof-the-art models on three translation benchmarks. 2 Background Neural Machine Translation. Generally, NMT (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) models the translation probability P(y|x; θ) based on the encoder-decoder paradigm where x is a source-language sentence, y is a target-language sentence, and θ is a set of model parameters. The decoder in the NMT model acts as a conditional language model that operates on a shifted copy of y, i.e., ⟨sos⟩, y0, ..., y|y|−1 where ⟨sos⟩is a start symbol of a sentence and representations of x learned by the encoder. For clarity, we use e(x) ∈Rd×|x| to denote the feature vectors (or word embeddings) of the sentence x where d is dimension size. 
Given a parallel training corpus S, the standard training objective for NMT is to minimize the empirical risk: Lclean(θ) = E Pδ(x,y) [ℓ(f(e(x), e(y); θ), ¨y)], (1) where f(e(x), e(y); θ) is a sequence of model predictions fj(e(x), e(y); θ) = P(y|y<j, x; θ) at position j, and ¨y is a sequence of one-hot label vectors for y (with label smoothing in the Transformer). ℓis the cross entropy loss. The expectation of the loss function is summed over the empirical distribution Pδ(x, y) of the training corpus: Pδ(x, y) = 1 |S| X (x′,y′)∈S δ(x = x′, y = y′), (2) where δ denotes the Dirac delta function. Generating Adversarial Examples for NMT. To improve NMT’s robustness to small perturbations in the input sentences, Cheng et al. (2019) incorporate adversarial examples into the NMT model training. These adversarial sentences x′ are generated by applying small perturbations that are jointly learned together with the NMT model: ˆx = argmax ˆx:R(ˆx,x)≤ϵ ℓ(f(e(ˆx), e(y); θ), ¨y), (3) where R(ˆx, x) captures the degree of semantic similarity and ϵ is an upper bound on the semantic distance between the adversarial example and the original example. Ideally, the adversarial sentences convey only barely perceptible differences to the original input sentence yet result in dramatic distortions of the model output. Cheng et al. (2019) propose the AdvGen algorithm, which greedily replaces words with their top k most probable alternatives, using the gradients of their word embeddings. Adversarial examples are designed to both attack and defend the NMT model. On the encoder side, an adversarial sentence ˆx is constructed from the original input x to attack the NMT model. To defend against adversarial perturbations in the source input ˆx, they use the AdvGen algorithm to find an adversarial target input ˆy from the decoder input y. For notational convenience, let π denote this algorithm, the adversarial example ˆs is stochastically induced by π as ˆs ←π(s; x, y, ξ) where ξ is the set of parameters used in π including the NMT model parameters θ. For a detailed definition of ξ, we refer to (Cheng et al., 2019). Hence, the set of adversarial examples originating from (x, y) ∈S, namely A(x,y), can be written as: A(x,y) = {(ˆx, ˆy)|ˆx ←π(x; x, y, ξsrc), ˆy ←π(y; ˆx, y, ξtgt)}, (4) where ξsrc and ξtgt are separate parameters for generating ˆx and ˆy, respectively. Finally, the robustness loss Lrobust is computed on A(x,y) with the loss ℓ(f(e(ˆx), e(ˆy); θ), ¨y), and is used together with Lclean to train the NMT model. Data Mixup. In image classification, the mixup data augmentation technique involves training on linear interpolations of randomly sampled pairs of examples (Zhang et al., 2018). Given a pair of images (x′, y′) and (x′′, y′′), where x′ denotes the RGB pixels of the input image and y′ is its one-hot label, mixup minimizes the sample loss from a vicinity distribution (Chapelle et al., 2001) Pv(˜x, ˜y) defined in the RGB-pixel (label) space: ˜x = λx′ + (1 −λ)x′′, (5) ˜y = λy′ + (1 −λ)y′′. (6) λ is drawn from a Beta distribution Beta(α, α) controlled by the hyperparameter α. When α →0, (˜x, ˜y) is close to any one of the images (x′, y′) and (x′′, y′′). Conversely, (˜x, ˜y) approaches the middle interpolation point between them when α →+∞. The neural networks g parameterized by ψ can be trained over the mixed images (˜x, ˜y) with the loss function Lmixup(θ) = ℓ(g(˜x; ψ), ˜y). In practice, the image pair is randomly sampled from the same mini-batch. 
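For concreteness, the mixup interpolation in Eqs. (5)-(6) can be sketched as follows. This is an illustrative fragment, not code from the paper; the image shapes and labels are placeholders, and α = 0.2 is reused here only because the paper later fixes that value for the mixup loss.

```python
# A minimal sketch of mixup (Eqs. 5-6): convexly combine two inputs and their
# one-hot labels with a Beta(alpha, alpha)-distributed coefficient. The same
# combination is later applied to sequences of word embeddings in AdvAug.
import numpy as np

def mixup_pair(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng(0)):
    lam = rng.beta(alpha, alpha)
    x_tilde = lam * x1 + (1.0 - lam) * x2   # Eq. (5)
    y_tilde = lam * y1 + (1.0 - lam) * y2   # Eq. (6), a soft label
    return x_tilde, y_tilde, lam

# Toy usage with two "images" and one-hot labels over 10 classes.
x1, x2 = np.random.rand(3, 32, 32), np.random.rand(3, 32, 32)
y1, y2 = np.eye(10)[3], np.eye(10)[7]
x_mix, y_mix, lam = mixup_pair(x1, y1, x2, y2)
print(lam, y_mix[3] + y_mix[7])  # the two label weights sum to 1
```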
observed sentence pairs adversarial sentence pairs interpolated sentence examples sampled from Paut interpolated sentence examples sampled from Padv x: ᬯӻమဩஉঅ,य़ਹ᮷ࡅ̶ཻ y: This idea is really goodeveryone likes it. x: ᬯӻమဩஉӧᲙ,य़ਹ᮷ࡅ̶ཻ y: This idea is not goodanyone loves it. ^ ^ ^ Figure 1: Illustration of training examples sampled from vicinity distributions of Padv and Paut. Solid circles are observed sentences in the training corpus S. Solid triangles are adversarial sentences generated by replacing words in their corresponding observed sentences. Dashed points are virtual sentences obtained by interpolating the embeddings of the solid points. The dashed triangles define the data space of adversarial examples from Padv. The circles (solid and dashed) constitute Paut. 3 Approach: AdvAug In our approach AdvAug, the goal is to reinforce the model over virtual data points surrounding the observed examples in the training set. We approximate the density of P(x, y) in the vicinities of the generated adversarial examples and observed training examples. To be specific, we design two vicinity distributions (Chapelle et al., 2001) to estimate the joint distribution of P(x, y): Padv for the (dynamically generated) adversarial examples and Paut for the (observed) authentic examples in S. Given the training set S, we have: Padv(˜x, ˜y) = 1 |S| X (x,y)∈S µadv(˜x, ˜y|A(x,y)), (7) Paut(˜x, ˜y) = 1 |S| X (x,y)∈S µaut(˜x, ˜y|x, y), (8) where A(x,y) is the set of adversarial examples originated from (x, y) defined in Eq. (4). We will discuss µadv and µaut in detail which define the probability functions, but first we give some highlevel descriptions: • Padv is a new vicinity distribution for virtual adversarial sentences of the same origin. It captures the intuition that the convex combination of adversarial sentences should have the same translation. It is the most important factor for improving the translation quality in our experiments. • Paut is a distribution to improve the NMT’s robustness by “mixing up” observed sentences of different origins. This distribution is similar to mixup, but it is defined over linear interpolations of the sequence of word embeddings of the source and target sentences. Although Paut by itself yields marginal improvements, we find it is complementary to Padv. We train the NMT model on two vicinity distributions Padv and Paut. Figure 1 illustrates examples sampled from them. As shown, a solid circle stands for an observed training example (i.e. a sentencepair) in S and a solid triangle denotes an adversarial example in A(x,y). For Padv, we construct virtual adversarial examples (dashed triangles) to amend the sample diversity by interpolating the word embeddings of solid triangles. Likewise, we interpolate the word embeddings of solid circles to model Paut for the (observed) authentic examples. This results in the dashed circles in Figure 1. Unlike prior works on vicinal risk minimization (Chapelle et al., 2001; Zhang et al., 2018), we do not directly observe the virtual sentences in Padv or Paut. This also distinguishes us from Cheng et al. (2019), who generate actual adversarial sentences in the discrete word space. In the remainder of this section, we will discuss the definition of Padv and Paut and how to optimize the translation loss over virtual sentences via mini-batch training. 3.1 Padv for Adversarial Data To compute µadv, we employ π similar as in (Cheng et al., 2019) to generate an adversarial example set A(x,y) from each instance (x, y) ∈S (see Eq. (4)). 
Let (x′, y′) and (x′′, y′′) be two examples randomly sampled from A(x,y). We align the two sequences by padding tokens to the end of the shorter sentence. Note that this operation aims for a general case (particularly for Paut) although the lengths of y′ and y′′ in A(x,y) are same. To obtain e(˜x) = [e(˜x1), . . . , e(˜x|˜x|)], we apply the convex combination mλ(x′, x′′) over the aligned word embeddings, which is: e(˜xi)=λe(x′ i) + (1 −λ)e(x′′ i ), ∀i ∈[1, |˜x|], (9) where λ ∼Beta(α, α). We use mλ(·, ·) for the interpolation. Similarly, e(˜y) can also be obtained with mλ(y′, y′′). All adversarial examples in A(x,y) are supposed to be translated into the same target sentence, and the convex combination still lies in space of the adversarial search ball defined in Eq. (3). As a result, all virtual sentence pairs (˜x, ˜y) ∈A(x,y) of the same origin can be fed into NMT models as source and target inputs which share the same soft target label for (x, y). µadv in Padv can be calculated from: µadv(˜x, ˜y|A(x,y)) = 1 |A(x,y)|2 X (x′,y′)∈A(x,y) X (x′′,y′′)∈A(x,y) E λ[δ(e(˜x) = mλ(x′, x′′), e(˜y) = mλ(y′, y′′)]. (10) Hence, the translation loss on vicinal adversarial examples Ladv(θ) can be integrated over Padv as: Ladv(θ)= E Padv(˜x,˜y) [ℓ(f(e(˜x), e(˜y); θ), ω)], (11) where ω is a sequence of output distributions (denoted as a sequence of label vectors, e.g. ¨y) as the soft target for the sentence y. We employ two useful techniques in computing the loss Ladv in Eq. (11). First, we minimize the KL-divergence between the model predictions at the word level: |y| X j=1 DKL(fj(e(x), e(y); ˆθ)||fj(e(˜x), e(˜y); θ)), (12) where ˆθ means a fixed copy of the current parameter set and no gradients are back-propagated through it. Removing constant values from Eq. (12) yields an equivalent solution of: ℓ(f(e(˜x), e(˜y); θ), ω) =ℓ(f(e(˜x), e(˜y); θ), f(e(x), e(y); ˆθ)). (13) Eq. (13) indicates that f(e(x), e(y); ˆθ) can be used as the soft target ω in Eq. (11) for virtual adversarial example (˜x, ˜y). KL-divergence enforces the model on virtual adversarial examples to indirectly learn from the soft target of the observed examples over large vocabularies. This justifies the use of ω in Eq. (11) and turns out to be more effective than directly learning from the ground-truth label. Besides, Eq. (11) needs to enumerate numerous pairs of adversarial examples in A(x,y) while in practice we only sample a pair at a time inside each mini-batch for training efficiency. We hence employ curriculum learning to do the importance sampling. To do so, we re-normalize the translation loss and employ a curriculum from (Jiang et al., 2018) to encourage the model to focus on the difficult training examples. Formally, for a mini-batch of the training losses L = {ℓi}m i=1, we re-weigh the batch loss using: L = 1 Pm i=1 I(ℓi > η) m X i=1 I(ℓi > η)ℓi, (14) where I(·) is an indicator function and η is set by a moving average tracking the p-th percentile of the example losses of every mini-batch. In our experiments, we set the p-th percentile to be 100×(1−rt) for the training iteration t, and gradually anneal rt using rt = 0.5t/β, where β is the hyperparameter. 3.2 Paut for Authentic Data We define the µaut in the vicinity distribution Paut for authentic examples as follows: µaut(˜x, ˜y|x, y) = 1 |S| X (x′,y′)∈S E λ[ δ(e(˜x) = mλ(x, x′), e(˜y) = mλ(y, y′), ˜ω = mλ(ω, ω′))]. (15) The translation loss on authentic data is integrated over all examples of the vicinity distribution Paut: Laut(θ) = E Paut(˜x,˜y) [ℓ(f(e(˜x), e(˜y); θ), ˜ω)]. 
(16) In our experiments, we select the value of λ in Eq. (15) twice for every (x, y): (1) a constant 1.0 and (2) a sample from the Beta distribution. The former is equivalent to sampling from the empirical distribution Pδ whereas the latter is similar to applying mixup in the embedding space of the sequence model. In other words, Laut(θ) equals the sum of two translation losses, Lclean(θ) computed on the original training examples when λ is 1.0 and Lmixup(θ) computed on virtual examples when λ is sampled from a Beta distribution. Accordingly, when λ is 1.0 we set ˜ω to be the interpolation of the sequences of one-hot label vectors for y and y′, i.e. ω = ¨y and ω′ = ¨y′. Otherwise ˜ω is the interpolation of model output vectors of (x, y) and (x′, y′), that is, ω = f(e(x), e(y); ˆθ) and ω′ = f(e(x′), e(y′); ˆθ). Algorithm 1: Proposed AdvAug function. Input: A batch of source and target sentences (X, Y); the selection ratio rt; the hyperparameter α. Output: Mini-batch losses Ladv and Laut 1 Function AdvAug(X, Y): 2 foreach (x, y) ∈(X, Y) do 3 ω ←f(e(x), e(y); ˆθ); 4 Sample two adversarial examples (x′, y′) and (x′′, y′′) from A(x,y) by Eq. (4) ; 5 λ ←Beta(α, α) ; 6 e(˜x) ←mλ(x′, x′′), e(˜y) ←mλ(y′, y′′); 7 ℓi ←ℓ(f(e(˜x), e(˜y); θ), ω) ; 8 end 9 Compute Ladv using rt and {ℓi} by Eq. (14) ; 10 (X′, Y′) ←Shuffle (X, Y) ; 11 foreach (x, y, x′, y′) ∈(X, Y, X′, Y′) do 12 ω ←f(e(x), e(y); ˆθ); 13 ω′ ←f(e(x′), e(y′); ˆθ); 14 λ ←Beta(α, α) ; 15 e(˜x) ←mλ(x, x′), e(˜y) ←mλ(y, y′) ; 16 ˜ω ←mλ(ω, ω′) ; 17 ℓi ←ℓ(f(e(˜x), e(˜y); θ), ˜ω) + ℓ(f(e(x), e(y); θ), ¨y) ; 18 end 19 Compute Laut by averaging {ℓi} ; 20 return Ladv, Laut 3.3 Training Finally, the training objective in our AdvAug is a combination of the two losses: θ∗= argmin θ {Laut(θ) + Ladv(θ)}. (17) Here, we omit two bidirectional language model losses for simplicity, which are used to recommend word candidates to maintain semantic similarities (Cheng et al., 2019). In practice, we need to compute the loss via minibatch training. For the Paut, we follow the pair sampling inside each mini-batch in mixup. It can avoid padding too much tokens because sentences of similar lengths are grouped within a mini-batch (Vaswani et al., 2017). For the Padv, we sample a pair of examples from A(x,y) for each (x, y) and cover the distribution over multiple training epochs. The entire procedure to calculate the translation losses, Ladv(θ) and Laut(θ), is presented in Algorithm 1. In a nutshell, for each batch of training examples, we firstly sample virtual examples from Padv and Paut by interpolating the embeddings of the adversarial or authentic training examples. Then we calculate the translation loss using their interpolated embeddings. Method Loss Config. MT06 MT02 MT03 MT04 MT05 MT08 Vaswani et al. (2017) Lclean 44.57 45.49 44.55 46.20 44.96 35.11 Miyato et al. (2017) 45.28 45.95 44.68 45.99 45.32 35.84 Sato et al. (2019) 45.75 46.37 45.02 46.49 45.88 35.90 Cheng et al. (2019) 46.95 47.06 46.48 47.39 46.58 37.38 Sennrich et al. (2016a)* 46.39 47.31 47.10 47.81 45.69 36.43 Edunov et al. (2018)* 46.20 47.78 46.93 47.80 46.81 36.79 Ours Lmixup 45.12 46.32 44.81 46.61 46.08 36.00 Laut 46.73 46.79 46.13 47.54 46.88 37.21 Lclean + Ladv 47.89 48.53 48.73 48.60 48.76 39.03 Laut + Ladv 49.26 49.03 47.96 48.86 49.88 39.63 Ours* Laut + Ladv 49.98 50.34 49.81 50.61 50.72 40.45 Table 1: Baseline comparison on NIST Chinese-English translation. * indicates the model uses extra corpora and means not elaborating on its training loss. 
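Before moving to the experiments, two pieces of Algorithm 1 can be sketched in isolation: the convex combination of the embeddings of two adversarial variants (Eq. 9) and the curriculum re-weighting of a mini-batch of losses (Eq. 14). This is an illustrative fragment under simplifying assumptions, not the authors' implementation: the adversarial pair and the NMT model are assumed given, η is computed per batch rather than as a moving average, and α = 8.0 follows the value reported for Laut and Ladv.

```python
# A hedged sketch of two steps of AdvAug's Algorithm 1.
import torch

def interpolate_embeddings(e1, e2, alpha=8.0):
    # e1, e2: [seq_len, d] embeddings of two adversarial variants of the same
    # origin, already padded to the same length.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * e1 + (1.0 - lam) * e2          # Eq. (9)

def curriculum_reweight(losses, rt):
    # losses: [batch] per-example losses; keep (roughly) the hardest
    # 100*rt percent and average over them, as in Eq. (14).
    eta = torch.quantile(losses, 1.0 - rt)      # a moving average in the paper
    mask = (losses > eta).float()
    return (mask * losses).sum() / mask.sum().clamp(min=1.0)

# Toy usage with random embeddings and losses.
e_adv1, e_adv2 = torch.randn(6, 512), torch.randn(6, 512)
e_mix = interpolate_embeddings(e_adv1, e_adv2)
batch_losses = torch.rand(8)
print(e_mix.shape, curriculum_reweight(batch_losses, rt=0.5).item())
```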
4 Experiments 4.1 Setup We verify our approach on translation tasks for three language pairs: Chinese-English, EnglishFrench, and English-German. The performance is evaluated with the 4-gram BLEU score (Papineni et al., 2002) calculated by the multi-bleu.perl script. We report case-sensitive tokenized BLEU scores for English-French and English-German, and caseinsensitive tokenized BLEU scores for ChineseEnglish. Note that all reported BLEU scores in our approach are from a single model rather than averaging multiple models (Vaswani et al., 2017). For the Chinese-English translation task, the training set is the LDC corpus consisting of 1.2M sentence pairs. The NIST 2006 dataset is used as the validation set, and NIST 02, 03, 04, 05, 08 are used as the test sets. We apply byte-pair encoding (BPE) (Sennrich et al., 2016b) with 60K merge operations to build two vocabularies comprising 46K Chinese sub-words and 30K English sub-words. We use the IWSLT 2016 corpus for English-French translation. The training corpus with 0.23M sentence pairs is preprocessed with the BPE script with 20K joint operations. The validation set is test2012 and the test sets are test2013 and test2014. For English-German translation, we use the WMT14 corpus consisting of 4.5M sentence pairs. The validation set is newstest2013 whereas the test set is newstest2014. We build a shared vocabulary of 32K sub-words using the BPE script. We implement our approach on top of the Transformer (Vaswani et al., 2017). The size of the hidden unit is 512 and the other hyperparameters are set following their default settings. There are three important hyperparameters in our approach, α in the Beta distribution and the word replacement ratio of γsrc ∈ξsrc, and γtgt ∈ξtgt detailed in Eq. (4). Note that γsrc and γtgt are not new hyperparameters but inherited from (Cheng et al., 2019). We tune these hyperameters on the validation set via a grid search, i.e. α ∈ {0.2, 0.4, 4, 8, 32}, γsrc ∈{0.10, 0.15, 0.25} and γtgt ∈{0.10, 0.15, 0.30, 0.5}. For the mixup loss Lmixup, α is fixed to 0.2. For the loss Laut and Ladv, the optimal value of α is 8.0. The optimal values of (γsrc, γtgt) are found to be (0.25, 0.50), (0.15, 0.30) and (0.15, 0.15) for Chinese-English, English-French and English-German, respectively, while it is set to (0.10, 0.10) only for backtranslated sentence pairs. β in Eq. (14) is set to 250K, 100K, 1M for Chinese-English, EnglishFrench and English-German. Unlike Cheng et al. (2019), we remove the learning of target language models to speed up the training. For each training batch, we introduce a batch of augmented adversarial examples and a batch of augmented authentic examples, which costs twice the vanilla training. For constructing adversarial examples, we solely compute the gradients for word embeddings which takes little time. After summing up the time of all steps, our total training time is about 3.3 times the vanilla training. 4.2 Main Results Chinese-English Translation. Table 1 shows results on the Chinese-English translation task, in comparison with the following six baseline methods. For a fair comparison, we implement all these Method Loss Config. English-French English-German test2013 test2014 newstest13 newstest14 Vaswani et al. (2017) Lclean 40.78 37.57 25.80 27.30 Sato et al. (2019) − 41.67 38.72 25.97 27.46 Cheng et al. 
(2019) − 41.76 39.46 26.34 28.34 Ours Lmixup 40.78 38.11 26.28 28.08 Laut 41.49 38.74 26.33 28.58 Laut + Ladv 43.03 40.91 27.20 29.57 Table 2: Results on IWSLT16 English-French and WMT14 English-German translation. methods using the Transformer backbone or report results from those papers on the same corpora. 1. The seminal Transformer model for NMT (Vaswani et al., 2017). 2. Following Miyato et al. (2017), we use adversarial learning to add continuous gradient-based perturbations to source word embeddings and extend it to the Transformer model. 3. Sato et al. (2019) leverage Miyato et al. (2017)’s idea into NMT by incorporating gradient-based perturbations to both source and target word embeddings and optimize the model with adversarial training. 4. Cheng et al. (2019) generate discrete adversarial examples guided by the gradients of word embeddings. Adversarial examples are used to both attack and defend the NMT model. 5. Sennrich et al. (2016a) translate monolingual corpora using an inverse NMT model and then augment the training data with them. 6. Based on Sennrich et al. (2016a), Edunov et al. (2018) propose three improved methods to generate back-translated data, which are sampling, top10 and beam+noise. Among those, we choose beam+noise as our baseline method, which can be regarded as an approach to incorporating noise into data. We first verify the importance of different translation losses in our approach. We find that both Laut and Ladv are useful in improving the Transformer model. Ladv is more important and yields a significant improvement when combined with the standard empirical loss Lclean (cf. Eq. (1)). These results validate the effectiveness of augmenting with virtual adversarial examples. When we use both Laut and Ladv to train the model, we obtain the best performance (up to 4.92 BLEU points on MT05). We also compare with the mixup loss. However, Lmixup is only slightly better than the standard empirical loss Lclean. Compared with the baseline methods without using extra corpora, our approach shows significant improvements over the state-of-the-art models. In particular, the superiority of Lclean + Ladv over both Cheng et al. (2019) and Sato et al. (2019) verifies that we propose a more effective method to address adversarial examples in NMT. We also directly incorporate two adversarial examples to NMT models without interpolating their embeddings, but we do not observe any further gain over Cheng et al. (2019). This substantiates the superior performance of our approach on the standard data sets. To compare with the approaches using extra monolingual corpora, we sample 1.25M English sentences from the Xinhua portion of the GIGAWORD corpus and list our performance in the last row of Table 1. When the back-translated corpus is incorporated, our approach yields further improvements, suggesting our approach complements the back-translation approaches. English-French and English-German Translation. Table 2 shows the comparison with the Transformer model (Vaswani et al., 2017), Sato et al. (2019) and Cheng et al. (2019) on English-French and English-German translation tasks. Our approach consistently outperforms all three baseline methods, yielding significant 3.34 and 2.27 BLEU point gains over the Transformer on the EnglishFrench and English-German translation tasks, respectively. We also conduct similar ablation studies on the translation loss. 
We still find that the combination of Ladv and Laut performs the best, which is consistent with the findings in the Chinese-English translation task. The substantial gains on these two translation tasks suggest the potential applicability of our approach to more language pairs.

Input: 但(但是)协议执行过程一波三折,致使和平进程一再受挫
Reference: however, implementation of the deals has witnessed ups and downs, resulting in continuous setbacks in the peace process
Vaswani et al. on Input: however, the process of implementing the agreement was full of twists and turns, with the result that the peace process suffered setbacks again and again.
Vaswani et al. on Noisy Input: the process of the agreement has caused repeated setbacks to the peace process.
Ours on Input: however, the process of implementing the agreement experienced twists and turns, resulting in repeated setbacks in the peace process.
Ours on Noisy Input: however, the process of implementing the agreement experienced twists and turns, resulting in repeated setbacks in the peace process.
Table 3: Translation examples of the Transformer and our model for an input and its adversarial (noisy) input.

Loss            α = 0.2   0.4     4       8       32
Lmixup          45.28     45.38   45.64   45.09   -
Laut            45.95     45.92   46.70   46.73   46.54
Lclean + Ladv   47.06     46.88   47.60   47.89   47.81
Table 4: Effect of α on the Chinese-English validation set. "-" indicates that the model fails to converge.

Method          0.00    0.05    0.10    0.15
Vaswani et al.  44.59   41.54   38.84   35.71
Miyato et al.   45.11   42.11   39.39   36.44
Sato et al.     45.75   44.04   41.25   38.78
Cheng et al.    46.95   44.20   41.71   39.89
Ours            49.26   47.53   44.71   41.76
Table 5: Results on artificial noisy inputs. Each column lists results for a different noise fraction.

4.3 Effect of α

The hyperparameter α controls the shape of the Beta distribution over interpolation weights. We study its effect on the validation set in Table 4. Notable differences occur between α < 1 and α > 1, because the Beta distribution takes two different shapes with α = 1 as the critical point. As we can see, both Laut and Ladv prefer a large α and perform better when α = 8. Recall that when α is large, mλ behaves similarly to a simple average function. For Lmixup, α = 4 performs slightly better, and a large α = 32 causes model training to fail. Although the result with α = 4 appears slightly better, it takes more iterations for the model to converge, i.e., 90K for α = 4 vs. 20K for α = 0.2. These results indicate the differences between the proposed vicinity distributions and the one used in mixup.

Figure 2: BLEU scores over training iterations on the Chinese-English validation set for Lclean, Lmixup, Laut, Lclean + Ladv, and Laut + Ladv.

4.4 Robustness to Noisy Inputs and Overfitting

To test robustness to noisy inputs, we follow Cheng et al. (2019) and construct a noisy data set by randomly replacing a word in each sentence of the standard validation set with a relevant alternative. The relevance between words is measured by the similarity of their word embeddings. For each sentence, 100 noisy candidates are generated and then re-scored with a bidirectional language model to pick the best one. Table 5 shows the results on artificial noisy inputs with different noise levels. Our approach shows higher robustness than all baseline methods across all noise levels. Figure 2 shows the evolution of BLEU scores during training. For Lclean, the BLEU score reaches its peak at about 20K iterations, and then the model starts overfitting.
In comparison, all of the training losses proposed in this paper are capable of resisting overfitting: in fact, even after 100K iterations, no significant regression is observed (not shown in this figure). At the same iteration, our results are consistently higher than both the empirical risk (Lclean) and mixup (Lmixup). As shown in Table 3, the baseline yields an incorrect translation possibly because the word “danshi(但是)” seldom occurs in this context in our training data. In contrast, our model incorporates embeddings of virtual sentences that contain “danshi(但是)” or its synonym “dan(但)”. This encourages our model to learn to push their embeddings closer during training, and make our model more robust to small perturbations in real sentences. 5 Related Work Data Augmentation. Data augmentation is an effective method to improve machine translation performance. Existing methods in NMT may be divided into two categories, based upon extra corpora (Sennrich et al., 2016a; Cheng et al., 2016; Zhang and Zong, 2016; Edunov et al., 2018) or original parallel corpora (Fadaee et al., 2017; Wang et al., 2018; Cheng et al., 2019). Recently, mixup (Zhang et al., 2018) has become a popular data augmentation technique for semi-supervised learning (Berthelot et al., 2019) and overcoming real-world noisy data (Jiang et al., 2019). Unlike prior works, we introduce a new method to augment the representations of the adversarial examples in sequence-tosequence training of the NMT model. Even without extra monolingual corpora, our approach substantially outperforms the widely-used back-translation methods (Sennrich et al., 2016a; Edunov et al., 2018). Furthermore, we can obtain even better performance by including additional monolingual corpora. Robust Neural Machine Translation. It is well known that neural networks are sensitive to noisy inputs (Szegedy et al., 2014; Goodfellow et al., 2014), and neural machine translation is no exception. Thus improving the robustness of NMT models has become a popular research topic (e.g., Belinkov and Bisk, 2018; Sperber et al., 2017; Ebrahimi et al., 2018; Cheng et al., 2018, 2019; Karpukhin et al., 2019; Li et al., 2019). Many of these studies focus on augmenting the training data to improve robustness, especially with adversarial examples (Ebrahimi et al., 2018; Cheng et al., 2019; Karpukhin et al., 2019; Michel et al., 2019). Others also tried to deal with this issue by finding better input representations (Durrani et al., 2019), adding adversarial regularization (Sato et al., 2019) and so on. In contrast to those studies, we propose the vicinity distribution defined in a smooth space by interpolating discrete adversarial examples. Experimental results show substantial improvements on both clean and noisy inputs. 6 Conclusion We have presented an approach to augment the training data of NMT models by introducing a new vicinity distribution defined over the interpolated embeddings of adversarial examples. To further improve the translation quality, we also incorporate an existing vicinity distribution, similar to mixup for observed examples in the training set. We design an augmentation algorithm over the virtual sentences sampled from both of the vicinity distributions in sequence-to-sequence NMT model training. Experimental results on Chinese-English, English-French and English-German translation tasks demonstrate the capability of our approach to improving both translation performance and robustness. References Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 
2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In International Conference on Learning Representations. David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. 2019. Mixmatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249. Olivier Chapelle, Jason Weston, L´eon Bottou, and Vladimir Vapnik. 2001. Vicinal risk minimization. In Advances in neural information processing systems, pages 416–422. Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly adversarial inputs. In Association for Computational Linguistics. Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards robust neural machine translation. In Association for Computational Linguistics. Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semisupervised learning for neural machine translation. In Association for Computational Linguistics. Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Yonatan Belinkov, and Preslav Nakov. 2019. One size does not fit all: Comparing nmt representations of different granularities. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Javid Ebrahimi, Daniel Lowd, and Dejing Dou. 2018. On adversarial examples for character-level neural machine translation. In Proceedings of COLING. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Empirical Methods in Natural Language Processing. Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2017. Data augmentation for low-resource neural machine translation. In Association for Computational Linguistics. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In International Conference on Machine Learning. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems. Lu Jiang, Di Huang, and Weilong Yang. 2019. Synthetic vs real: Deep learning on controlled noise. arXiv preprint arXiv:1911.09781. Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. 2018. Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels. In International Conference on Machine Learning. Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, and Marjan Ghazvininejad. 2019. Training on synthetic noise improves robustness to natural noise in machine translation. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019). Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, and Hassan Sajjad. 2019. Findings of the first shared task on machine translation robustness. arXiv preprint arXiv:1906.11943. Paul Michel, Xian Li, Graham Neubig, and Juan Pino. 2019. On evaluation of adversarial perturbations for sequence-to-sequence models. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2017. Adversarial training methods for semisupervised text classification. 
In International Conference on Learning Representations. Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. 2016. Distributional smoothing with virtual adversarial training. In International Conference on Learning Representations. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Association for Computational Linguistics. Motoki Sato, Jun Suzuki, and Shun Kiyono. 2019. Effective adversarial regularization for neural machine translation. In Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Association for Computational Linguistics. Matthias Sperber, Jan Niehues, and Alex Waibel. 2017. Toward robust neural machine translation for noisy input sequences. In International Workshop on Spoken Language Translation. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. Xinyi Wang, Hieu Pham, Zihang Dai, and Graham Neubig. 2018. Switchout: an efficient data augmentation algorithm for neural machine translation. In Empirical Methods in Natural Language Processing. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations. Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Empirical Methods in Natural Language Processing.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 567–582 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 567 Efficient Dialogue State Tracking by Selectively Overwriting Memory Sungdong Kim Sohee Yang Gyuwan Kim Sang-Woo Lee Clova AI, NAVER Corp. {sungdong.kim, sh.yang, gyuwan.kim, sang.woo.lee}@navercorp.com Abstract Recent works in dialogue state tracking (DST) focus on an open vocabulary-based setting to resolve scalability and generalization issues of the predefined ontology-based approaches. However, they are inefficient in that they predict the dialogue state at every turn from scratch. Here, we consider dialogue state as an explicit fixed-sized memory and propose a selectively overwriting mechanism for more efficient DST. This mechanism consists of two steps: (1) predicting state operation on each of the memory slots, and (2) overwriting the memory with new values, of which only a few are generated according to the predicted state operations. Our method decomposes DST into two sub-tasks and guides the decoder to focus only on one of the tasks, thus reducing the burden of the decoder. This enhances the effectiveness of training and DST performance. Our SOM-DST (Selectively Overwriting Memory for Dialogue State Tracking) model achieves state-of-theart joint goal accuracy with 51.72% in MultiWOZ 2.0 and 53.01% in MultiWOZ 2.1 in an open vocabulary-based DST setting. In addition, we analyze the accuracy gaps between the current and the ground truth-given situations and suggest that it is a promising direction to improve state operation prediction to boost the DST performance.1 1 Introduction Building robust task-oriented dialogue systems has gained increasing popularity in both the research and industry communities (Chen et al., 2017). Dialogue state tracking (DST), one of the essential tasks in task-oriented dialogue systems (Zhong et al., 2018), is keeping track of user goals or intentions throughout a dialogue in the form of a set of slot-value pairs, i.e., dialogue state. Because the 1The code is available at github.com/clovaai/som-dst. Figure 1: An example of how SOM-DST performs dialogue state tracking at a specific dialogue turn (in this case, fifth). The shaded part is the input to the model, and “Dialogue State at turn 5” at the right-bottom part is the output of the model. Here, UPDATE operation needs to be performed on the 10th and 11th slot. DST at this turn is challenging since the model requires reasoning over the long-past conversation. However, SOMDST can still robustly perform DST because the previous dialogue state is directly utilized like a memory. next dialogue system action is selected based on the current dialogue state, an accurate prediction of the dialogue state has significant importance. Traditional neural DST approaches assume that all candidate slot-value pairs are given in advance, i.e., they perform predefined ontology-based DST (Mrkˇsi´c et al., 2017; Zhong et al., 2018; Nouri and Hosseini-Asl, 2018; Lee et al., 2019). Most previous works that take this approach perform DST by scoring all possible slot-value pairs in the ontology and selecting the value with the highest score as the predicted value of a slot. Such an approach has been widely applied to datasets like DSTC2 and WOZ2.0, which have a relatively small ontology size. 
(Henderson et al., 2014; Wen et al., 2017) 568 Although this approach simplifies the task, it has inherent limitations: (1) it is often difficult to obtain the ontology in advance, especially in a real scenario (Xu and Hu, 2018), (2) predefined ontologybased DST cannot handle previously unseen slot values, and (3) the approach does not scale large since it has to go over all slot-value candidates at every turn to predict the current dialogue state. Indeed, recent DST datasets often have a large size of ontology; e.g., the total number of slot-value candidates in MultiWOZ 2.1 is 4510, while the numbers are much smaller in DSTC2 and WOZ2.0 as 212 and 99, respectively (Budzianowski et al., 2018). To address these issues, recent methods employ an approach that either directly generates or extracts a value from the dialogue context for every slot, allowing open vocabulary-based DST (Lei et al., 2018; Gao et al., 2019; Wu et al., 2019; Ren et al., 2019). While this formulation is relatively more scalable and robust to handling unseen slot values, many of the previous works do not efficiently perform DST since they predict the dialogue state from scratch at every dialogue turn. In this work, we focus on an open vocabularybased setting and propose SOM-DST (Selectively Overwriting Memory for Dialogue State Tracking). Regarding dialogue state as a memory that can be selectively overwritten (Figure 1), SOM-DST decomposes DST into two sub-tasks: (1) state operation prediction, which decides the types of the operations to be performed on each of the memory slots, and (2) slot value generation, which generates the values to be newly written on a subset of the memory slots (Figure 2). This decomposition allows our model to efficiently generate the values of only a minimal subset of the slots, while many of the previous works generate or extract the values of all slots at every dialogue turn. Moreover, this decomposition reduces the difficulty of DST in an open-vocabulary based setting by clearly separating the roles of the encoder and the decoder. Our encoder, i.e., state operation predictor, can focus on selecting the slots to pass to the decoder so that the decoder, i.e., slot value generator, can focus only on generating the values of those selected slots. To the best of our knowledge, our work is the first to propose such a selectively overwritable memorylike perspective and a discrete two-step approach on DST. Our proposed SOM-DST achieves state-of-theart joint goal accuracy in an open vocabulary-based DST setting on two of the most actively studied datasets: MultiWOZ 2.0 and MultiWOZ 2.1. Error analysis (Section 6.2) further reveals that improving state operation prediction can significantly boost the final DST accuracy. In summary, the contributions of our work built on top of a perspective that considers dialogue state tracking as selectively overwriting memory are as follows: • Enabling efficient DST, generating the values of a minimal subset of the slots by utilizing the previous dialogue state at each turn. • Achieving state-of-the-art performance on MultiWOZ 2.0 and MultiWOZ 2.1 in an open vocabulary-based DST setting. • Highlighting the potential of improving the state operating prediction accuracy in our proposed framework. 
2 Previous Open Vocabulary-based DST Many works on recent task-oriented dialogue datasets with a large scale ontology, such as MultiWOZ 2.0 and MultiWOZ 2.1, solve DST in an open vocabulary-based setting (Gao et al., 2019; Wu et al., 2019; Ren et al., 2019; Le et al., 2020a,b). Wu et al. (2019) show the potential of applying the encoder-decoder framework (Cho et al., 2014a) to open vocabulary-based DST. However, their method is not computationally efficient because it performs autoregressive generation of the values for all slots at every dialogue turn. Ren et al. (2019) tackle the drawback of the model of Wu et al. (2019), that their model generates the values of all slots at every dialogue turn, by using a hierarchical decoder. In addition, they come up with a new notion dubbed Inference Time Complexity (ITC) to compare the efficiency of different DST models. ITC is calculated using the number of slots J and the number of corresponding slot values M.2 Following their work, we also calculate ITC in Appendix B for comparison. Le et al. (2020b) introduce another work that tackles the efficiency issue. To maximize the computational efficiency, they use a non-autoregressive decoder to generate the slot values of the current dialogue state at once. They encode the slot type information together with the dialogue context and 2The notations used in the work of Ren et al. (2019) are n and m, respectively. 569 Figure 2: The overview of the proposed SOM-DST. SOM-DST takes the previous turn dialogue utterances Dt−1, current turn dialogue utterances Dt, and the previous dialogue state Bt−1 as the input and outputs the current dialogue state Bt. This is performed by two sub-components: state operation predictor and slot value generator. State operation predictor takes Dt−1, Dt, and Bt−1 as the input and predicts the operations to perform on each of the slots. Domain classification is jointly performed as an auxiliary task. Slot value generator generates the values for the slots that take UPDATE as the predicted operation. The value generation for a slot is done in an autoregressive manner. the delexicalized dialogue context. They do not use the previous turn dialogue state as the input. Le et al. (2020a) process the dialogue context in both domain-level and slot-level. They make the final representation to generate the values using a late fusion approach. They show that there is a performance gain when the model is jointly trained with response generation. However, they still generate the values of every slot at each turn, like Wu et al. (2019). Gao et al. (2019) formulate DST as a reading comprehension task and propose a model named DST Reader that extracts the values of the slots from the input. They introduce and show the importance of the concept of a slot carryover module, i.e., a component that makes a binary decision whether to carry the value of a slot from the previous turn dialogue state over to the current turn dialogue state. The definition and use of discrete operations in our work is inspired by their work. Zhang et al. (2019) target the issue of illformatted strings that generative models suffer from. In order to avoid this issue, they take a hybrid approach. For the slots they categorize as picklistbased slots, they use a predefined ontology-based approach as in the work of Lee et al. (2019); for the slots they categorize as span-based slots, they use a span extraction-based method like DST-Reader (Gao et al., 2019). 
However, their hybrid model shows lower performance than when they use only the picklist-based approach. Although their solely picklist-based model achieves state-of-the-art joint accuracy in MultiWOZ 2.1, it operates in a predefined ontology-based setting and thus cannot avoid the scalability and generalization issues of predefined ontology-based DST.

3 Selectively Overwriting Memory for Dialogue State Tracking

Figure 2 illustrates the overview of SOM-DST. To describe the proposed SOM-DST, we formally define the problem setting in our work.

Dialogue State We define the dialogue state at turn t, B_t = {(S^j, V_t^j) | 1 ≤ j ≤ J}, as a fixed-sized memory whose keys are slots S^j and whose values are the corresponding slot values V_t^j, where J is the total number of such slots. Following the convention of MultiWOZ 2.0 and MultiWOZ 2.1, we use the term "slot" to refer to the concatenation of a domain name and a slot name.

Special Value There are two special values, NULL and DONTCARE. NULL means that no information is given about the slot up to the turn. For instance, the dialogue state before the beginning of any dialogue, B_0, has only NULL as the value of all slots. DONTCARE means that the slot neither needs to be tracked nor considered important in the dialogue at that time.3

3Such notions of "none value" and "dontcare value" appear in the previous works as well (Wu et al., 2019; Gao et al., 2019; Le et al., 2020b; Zhang et al., 2019).

Operation At every turn t, an operation r_t^j ∈ O = {CARRYOVER, DELETE, DONTCARE, UPDATE} is chosen by the state operation predictor (Section 3.1) and performed on each slot S^j to set its current turn value V_t^j. When an operation is performed, it either keeps the slot value unchanged (CARRYOVER) or changes it to some value different from the previous one (DELETE, DONTCARE, and UPDATE), as follows:

V_t^j =
\begin{cases}
V_{t-1}^j & \text{if } r_t^j = \text{CARRYOVER} \\
\text{NULL} & \text{if } r_t^j = \text{DELETE} \\
\text{DONTCARE} & \text{if } r_t^j = \text{DONTCARE} \\
v & \text{if } r_t^j = \text{UPDATE}
\end{cases}

The operations that set the value of a slot to a special value (DELETE to NULL and DONTCARE to DONTCARE, respectively) are chosen only when the previous slot value V_{t-1}^j is not already the corresponding special value. The UPDATE operation requires the generation of a new value v ∉ {V_{t-1}^j, NULL, DONTCARE} by the slot value generator (Section 3.2). The state operation predictor performs state operation prediction as a classification task, and the slot value generator performs slot value generation to find the values of the slots on which UPDATE should be performed. The two components of SOM-DST are jointly trained to predict the current turn dialogue state.

3.1 State Operation Predictor

Input Representation We denote the representation of the dialogue utterances at turn t as D_t = A_t ⊕ ; ⊕ U_t ⊕ [SEP], where A_t is the system response and U_t is the user utterance. ";" is a special token used to mark the boundary between A_t and U_t, and [SEP] is a special token used to mark the end of a dialogue turn. We denote the representation of the dialogue state at turn t as B_t = B_t^1 ⊕ . . . ⊕ B_t^J, where B_t^j = [SLOT]^j ⊕ S^j ⊕ - ⊕ V_t^j is the representation of the j-th slot-value pair. "-" is a special token used to mark the boundary between a slot and a value. [SLOT]^j is a special token used to aggregate the information of the j-th slot-value pair into a single vector, like the use case of the [CLS] token in BERT (Devlin et al., 2019). In this work, we use the same special token [SLOT] for all [SLOT]^j. Our state operation predictor employs a pretrained BERT encoder.
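As a concrete reading of the operation-conditioned update rule above, the following Python sketch applies predicted operations to a slot-value memory. It is illustrative only: the dictionary-based memory, the sentinel constants, and the function name are our assumptions, not part of the SOM-DST implementation.

```python
from typing import Dict, Optional

NULL = None            # sentinel for "no information given yet"
DONTCARE = "dontcare"  # sentinel for "slot need not be tracked"

def overwrite_memory(prev_state: Dict[str, Optional[str]],
                     operations: Dict[str, str],
                     new_values: Dict[str, str]) -> Dict[str, Optional[str]]:
    """Selectively overwrite the slot-value memory B_{t-1} into B_t.

    prev_state: slot -> value at turn t-1 (B_{t-1})
    operations: slot -> one of CARRYOVER / DELETE / DONTCARE / UPDATE (r_t^j)
    new_values: generated values, needed only for slots tagged UPDATE
    """
    state = {}
    for slot, prev_value in prev_state.items():
        op = operations[slot]
        if op == "CARRYOVER":
            state[slot] = prev_value
        elif op == "DELETE":
            state[slot] = NULL
        elif op == "DONTCARE":
            state[slot] = DONTCARE
        elif op == "UPDATE":
            state[slot] = new_values[slot]  # produced by the slot value generator
        else:
            raise ValueError(f"unknown operation: {op}")
    return state
```

For example, if the predicted operations are CARRYOVER for "hotel-area" and UPDATE for "taxi-departure", only the "taxi-departure" value ever reaches the slot value generator, which is what keeps the per-turn generation cheap.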
The input tokens to the state operation predictor are the concatenation of the previous turn dialog utterances, the current turn dialog utterances, and the previous turn dialog state:4 Xt = [CLS] ⊕Dt−1 ⊕Dt ⊕Bt−1, where [CLS] is a special token added in front of every turn input. Using the previous dialogue state as the input serves as an explicit, compact, and informative representation of the dialogue history for the model. When the value of the j-th slot at time t −1, i.e., V j t−1, is NULL, we use a special token [NULL] as the input. When the value is DONTCARE, we use the string “dont care” to take advantage of the semantics of the phrase “don’t care” that the pretrained BERT encoder would have already learned. The input to BERT is the sum of the embeddings of the input tokens Xt, segment id embeddings, and position embeddings. For the segment id, we use 0 for the tokens that belong to Dt−1 and 1 for the tokens that belong to Dt or Bt−1. The position embeddings follow the standard choice of BERT. Encoder Output The output representation of the encoder is Ht ∈R|Xt|×d, and h[CLS] t , h[SLOT]j t ∈ Rd are the outputs that correspond to [CLS] and [SLOT]j, respectively. hX t , the aggregated sequence representation of the entire input Xt, is obtained by a feed-forward layer with a learnable parameter Wpool ∈Rd×d as: hX t = tanh(Wpool h[CLS] t ). State Operation Prediction State operation prediction is a four-way classification performed on top of the encoder output for each slot representation h[SLOT]j t : P j opr,t = softmax(Wopr h[SLOT]j t ), where Wopr ∈R|O|×d is a learnable parameter and P j opr,t ∈R|O| is the probability distribution over operations for the j-th slot at turn t. In our formulation, |O| = 4, because O = {CARRYOVER, DELETE, DONTCARE, UPDATE}. Then, the operation is determined by rj t = argmax(P j opr,t) and the slot value generation is performed on only the slots whose operation is UPDATE. We define the set of the slot indices which require the value generation as Ut = {j | rj t = UPDATE}, and its size as J′ t = |Ut|. 4We use only the previous turn dialogue utterances Dt−1 as the dialogue history, i.e., the size of the dialogue history is 1. This is because our model assumes Markov property in dialogues as a part of the input, the previous turn dialogue state Bt−1, can serve as a compact representation of the whole dialogue history. 571 3.2 Slot Value Generator For each j-th slot such that j ∈Ut, the slot value generator generates a value. Our slot value generator differs from the generators of many of the previous works because it generates the values for only J′ t number of slots, not J. In most cases, J′ t ≪J, so this setup enables an efficient computation where only a small number of slot values are newly generated. We use Gated Recurrent Unit (GRU) (Cho et al., 2014b) decoder like Wu et al. (2019). GRU is initialized with gj,0 t = hX t and ej,0 t = h[SLOT]j t , and recurrently updates the hidden state gj,k t ∈Rd by taking a word embedding ej,k t as the input until [EOS] token is generated: gj,k t = GRU(gj,k−1 t , ej,k t ). The decoder hidden state is transformed to the probability distribution over the vocabulary at the k-th decoding step, where E ∈Rdvcb×d is the word embedding matrix shared across the encoder and the decoder, such that dvcb is the vocabulary size. P j,k vcb,t = softmax(E gj,k t ) ∈Rdvcb. As the work of Wu et al. 
(2019), we use the softgated copy mechanism (See et al., 2017) to get the final output distribution P j,k val,t over the candidate value tokens: P j,k ctx,t = softmax(Ht gj,k t ) ∈R|Xt|, P j,k val,t = αP j,k vcb,t + (1 −α)P j,k ctx,t, such that α is a scalar value computed as: α = sigmoid(W1 [gj,k t ; ej,k t ; cj,k t ]), where W1 ∈R1×(3d) is a learnable parameter and cj,k t = P j,k ctx,t Ht ∈Rd is a context vector. 3.3 Objective Function During training, we jointly optimize both state operation predictor and slot value generator. State Operation Predictor In addition to the state operation classification, we use domain classification as an auxiliary task to force the model to learn the correlation of slot operations and domain transitions in between dialogue turns. Domain classification is done with a softmax layer on top of hX t : Pdom,t = softmax(Wdom hX t ), where Wdom ∈Rddom×d is a learnable parameter and Pdom,t ∈Rddom is the probability distribution over domains at turn t. ddom is the number of domains defined in the dataset. The loss for each of state operation classification and domain classification is the average of the negative log-likelihood, as follows: Lopr,t = −1 J J X j=1 (Y j opr,t)⊺log P j opr,t, Ldom,t = −(Ydom,t)⊺log Pdom,t, where Ydom,t ∈Rddom is the one-hot vector for the ground truth domain and Y j opr,t ∈R|O| is the one-hot vector for the ground truth operation for the j-th slot. Slot Value Generator The objective function to train slot value generator is also the average of the negative log-likelihood: Lsvg,t = −1 |Ut| X j∈Ut 1 Kj t Kj t X k=1 (Y j,k val,t)⊺log P j,k val,t, where Kj t is the number of tokens of the ground truth value that needs to be generated for the j-th slot. Y j,k val,t ∈Rdvcb is the one-hot vector for the ground truth token that needs to be generated for the j-th slot at the k-th decoding step. Therefore, the final joint loss Ljoint,t to be minimized at dialogue turn t is the sum of the losses mentioned above: Ljoint,t = Lopr,t + Ldom,t + Lsvg,t. 4 Experimental Setup 4.1 Datasets We use MultiWOZ 2.0 (Budzianowski et al., 2018) and MultiWOZ 2.1 (Eric et al., 2019) as the datasets in our experiments. These datasets are two of the largest publicly available multi-domain taskoriented dialogue datasets, including about 10,000 dialogues within seven domains. MultiWOZ 2.1 is a refined version of MultiWOZ 2.0 in which the annotation errors are corrected.5 Following Wu et al. (2019), we use only five domains (restaurant, train, hotel, taxi, attraction) 5See Table 8 in Appendix A for more details of MultiWOZ 2.1. 572 excluding hospital and police.6 Therefore, the number of domains ddom is 5 and the number of slots J is 30 in our experiments. We use the script provided by Wu et al. (2019) to preprocess the datasets.7 4.2 Training We employ the pretrained BERT-base-uncased model8 for state operation predictor and one GRU (Cho et al., 2014b) for slot value generator. The hidden size of the decoder is the same as that of the encoder, d, which is 768. The token embedding matrix of slot value generator is shared with that of state operation predictor. We use BertAdam as our optimizer (Kingma and Ba, 2015). We use greedy decoding for slot value generator. The encoder of state operation predictor makes use of a pretrained model, whereas the decoder of slot value generator needs to be trained from scratch. Therefore, we use different learning rate schemes for the encoder and the decoder. 
We set the peak learning rate and warmup proportion to 4e-5 and 0.1 for the encoder and 1e-4 and 0.1 for the decoder, respectively. We use a batch size of 32 and set the dropout (Srivastava et al., 2014) rate to 0.1. We also utilize word dropout (Bowman et al., 2016) by randomly replacing the input tokens with the special [UNK] token with the probability of 0.1. The max sequence length for all inputs is fixed to 256. We train state operation predictor and slot value generator jointly for 30 epochs and choose the model that reports the best performance on the validation set. During training, we use the ground truth state operations and the ground truth previous turn dialogue state instead of the predicted ones. When the dialogue state is fed to the model, we randomly shuffle the slot order with a rate of 0.5. This is to make state operation predictor exploit the semantics of the slot names and not rely on the position of the slot tokens or a specific slot order. During inference or when the slot order is not shuffled, the slots are sorted alphabetically. We use teacher forcing 50% of the time to train the decoder. All experiments are performed on NAVER Smart Machine Learning (NSML) platform (Sung et al., 2017; Kim et al., 2018). All the reported results of SOM-DST are averages over ten runs. 6The excluded domains take up only a small portion of the dataset and do not even appear in the test set. 7github.com/jasonwu0731/trade-dst 8github.com/huggingface/transformers 4.3 Baseline Models We compare the performance of SOM-DST with both predefined ontology-based models and open vocabulary-based models. FJST uses a bidirectional LSTM to encode the dialogue history and uses a feed-forward network to predict the value of each slot (Eric et al., 2019). HJST is proposed together with FJST; it encodes the dialogue history using an LSTM like FJST but uses a hierarchical network (Eric et al., 2019). SUMBT exploits BERT-base as the encoder for the dialogue context and slot-value pairs. After encoding them, it scores every candidate slot-value pair in a non-parametric manner using a distance measure (Lee et al., 2019). HyST employs a hierarchical RNN encoder and takes a hybrid approach that incorporates both a predefined ontology-based setting and an open vocabulary-based setting (Goel et al., 2019). DST Reader formulates the problem of DST as an extractive QA task; it uses BERT-base to make the contextual word embeddings and extracts the value of the slots from the input as a span (Gao et al., 2019). TRADE encodes the whole dialogue context with a bidirectional GRU and decodes the value for every slot using a copy-augmented GRU decoder (Wu et al., 2019). COMER uses BERT-large as a feature extractor and a hierarchical LSTM decoder to generate the current turn dialogue state itself as the target sequence (Ren et al., 2019). NADST uses a Transformer-based nonautoregressive decoder to generate the current turn dialogue state (Le et al., 2020b). ML-BST uses a Transformer-based architecture to encode the dialogue context with the domain and slot information and combines the outputs in a late fusion approach. Then, it generates the slot values and the system response jointly (Le et al., 2020a). DS-DST uses two BERT-base encoders and takes a hybrid approach of predefined ontology-based DST and open vocabulary-based DST. It defines picklist-based slots for classification similarly to SUMBT and span-based slots for span extraction like DST Reader (Zhang et al., 2019). 
573 Table 1: Joint goal accuracy on the test set of MultiWOZ 2.0 and 2.1. * indicates a result borrowed from Eric et al. (2019). HyST and DS-DST use a hybrid approach, partially taking advantage of the predefined ontology. † indicates the case where BERT-large is used for our model. MultiWOZ 2.0 MultiWOZ 2.1 Predefined Ontology HJST∗(Eric et al., 2019) 38.40 35.55 FJST∗(Eric et al., 2019) 40.20 38.00 SUMBT (Lee et al., 2019) 42.40 HyST∗(Goel et al., 2019) 42.33 38.10 DS-DST (Zhang et al., 2019) 51.21 DST-picklist (Zhang et al., 2019) 53.30 Open Vocabulary DST Reader∗(Gao et al., 2019) 39.41 36.40 TRADE∗(Wu et al., 2019) 48.60 45.60 COMER (Ren et al., 2019) 48.79 NADST (Le et al., 2020b) 50.52 49.04 ML-BST (Le et al., 2020a) 50.91 SOM-DST (ours) 51.72 53.01 SOM-DST† (ours) 52.32 53.68 DST-picklist is proposed together with DS-DST and uses a similar architecture, but it performs only predefined ontology-based DST considering all slots as picklist-based slots (Zhang et al., 2019). 5 Experimental Results 5.1 Joint Goal Accuracy Table 1 shows the joint goal accuracy of SOM-DST and other models on the test set of MultiWOZ 2.0 and MultiWOZ 2.1. Joint goal accuracy is an accuracy which checks whether all slot values predicted at a turn exactly match the ground truth values. As shown in the table, SOM-DST achieves state-of-the-art performance in an open vocabularybased setting. Interestingly, on the contrary to the previous works, our model achieves higher performance on MultiWOZ 2.1 than on MultiWOZ 2.0. This is presumably because our model, which explicitly uses the dialogue state labels as input, benefits more from the error correction on the state annotations done in MultiWOZ 2.1.9 9Eric et al. (2019) report that the correction of the annotations done in MultiWOZ 2.1 changes about 32% of the state annotations of MultiWOZ 2.0, which indicates that MultiWOZ 2.0 consists of many annotation errors. Table 2: Domain-specific results on the test set of MultiWOZ 2.1. Our model outperforms other models in taxi and train domains. Domain Model Joint Accuracy Slot Accuracy Attraction NADST 66.83 98.79 ML-BST 70.78 99.06 SOM-DST (ours) 69.83 98.86 Hotel NADST 48.76 97.70 ML-BST 49.52 97.50 SOM-DST (ours) 49.53 97.35 Restaurant NADST 65.37 98.78 ML-BST 66.50 98.76 SOM-DST (ours) 65.72 98.56 Taxi NADST 33.80 96.69 ML-BST 23.05 96.42 SOM-DST (ours) 59.96 98.01 Train NADST 62.36 98.36 ML-BST 65.12 90.22 SOM-DST (ours) 70.36 98.67 5.2 Domain-Specific Accuracy Table 2 shows the domain-specific results of our model and the concurrent works which report such results (Le et al., 2020a,b). Domain-specific accuracy is the accuracy measured on a subset of the predicted dialogue state, where the subset consists of the slots specific to a domain. While the performance is similar to or a little lower than that of other models in other domains, SOM-DST outperforms other models in taxi and train domains. This implies that the state-of-the-art joint goal accuracy of our model on the test set comes mainly from these two domains. A characteristic of the data from these domains is that they consist of challenging conversations; the slots of these domains are filled with more diverse values than other domains,10 and there are more than one domain changes, i.e., the user changes the conversation topic during a dialogue more than once. For a specific example, among the dialogues where the domain switches more than once, the number of conversations that end in taxi domain is ten times more than in other cases. 
A more detailed statistics are given in Table 10 in Appendix A. Therefore, we assume our model performs relatively more robust DST in such challenging conversations. We conjecture that this strength attributes to the effective utilization of the previous turn dialogue state in its explicit form, like using a memory; 10The statistics of the slot value vocabulary size are shown in Table 9 in Appendix A. 574 Table 3: Joint goal accuracy on the MultiWOZ 2.1 test set when the four-way state operation prediction changes to two-way, three-way, or six-way. State Operations Joint Accuracy 4 CARRYOVER, DELETE, 53.01 DONTCARE, UPDATE 2 CARRYOVER, NON-CARRYOVER 52.06 3 CARRYOVER, DONTCARE, UPDATE 52.63 3 CARRYOVER, DELETE, UPDATE 52.64 6 CARRYOVER, DELETE, 52.97 DONTCARE, UPDATE, YES, NO the model can explicitly keep even the information mentioned near the beginning of the conversation and directly copy the values from this memory whenever necessary. Figure 1 shows an example of a complicated conversation in MultiWOZ 2.1, where our model accurately predicts the dialogue state. More sample outputs of SOM-DST are provided in Appendix C. 6 Analysis 6.1 Choice of State Operations Table 3 shows the joint goal accuracy where the four-way state operation prediction changes to twoway, three-way, or six-way. The joint goal accuracy drops when we use twoway state operation prediction, which is a binary classification of whether to (1) carry over the previous slot value to the current turn or (2) generate a new value, like Gao et al. (2019). We assume the reason is that it is better to separately model operations DELETE, DONTCARE, and UPDATE that correspond to the latter class of the binary classification, since the values of DELETE and DONTCARE tend to appear implicitly while the values for UPDATE are often explicitly expressed in the dialogue. We also investigate the performance when only three operations are used or two more state operations, YES and NO, are used. YES and NO represent the cases where yes or no should be filled as the slot value, respectively. The performance drops in all of the cases. 6.2 Error Analysis Table 4 shows the joint goal accuracy of the combinations of the cases where the ground truth is used or not for each of the previous turn dialogue state, state operations at the current turn, and slot Table 4: Joint goal accuracy of the current and the ground truth-given situations. Relative error rate is the proportion of the error when 100% is set as the error where no ground truth is used for SOP and SVG. (GT: Ground Truth, SOP: State Operation Prediction, SVG: Slot Value Generation, Pred: Predicted) GT GT Joint Relative SOP SVG Accuracy Error Rate Pred Bt−1 (w/ Error Propagation) 53.01 100.0 ✓ 56.37 92.85 ✓ 89.85 21.60 ✓ ✓ 100.0 0.00 GT Bt−1 (w/o Error Propagation) 81.00 100.0 ✓ 82.80 90.53 ✓ 96.27 19.63 ✓ ✓ 100.0 0.00 values for UPDATE at the current turn. From this result, we analyze which of state operation predictor and slot value generator is more responsible for the error in the joint goal prediction, under the cases where error propagation occurs or not. Among the absolute error of 46.99% made under the situation that error propagation occurs, i.e., the dialogue state predicted at the previous turn is fed to the model, it could be argued that 92.85% comes from state operation predictor, 21.6% comes from slot value generator, and 14.45% comes from both of the components. 
This indicates that at least 78.4% to 92.85% of the error comes from state operation predictor, and at least 7.15% to 21.6% of the error comes from slot value generator. 11 Among the absolute error of 19% made under the error propagation-free situation, i.e., ground truth previous turn dialogue state is fed to the model, it could be argued that 90.53% comes from state operation predictor, 19.63% comes from slot value generator, and 10.16% comes from both of the components. This indicates that at least 80.37% to 90.53% of the error comes from state operation predictor, and at least 9.47% to 19.63% of the error comes from slot value generator. . Error propagation that comes from using the dialogue state predicted at the previous turn increases the error 2.47 (=100−53.01 100−81.00) times. Both with and without error propagation, a relatively large amount 11The calculation of the numbers in the paragraph is done as follows. (The figures in the paragraph immediately below are calculated in the same way.) 100 −53.01 = 46.99 92.85 + 21.6 −100 = 14.45 (100 −56.37)/46.99 = 92.85 92.85 −14.45 = 78.4 (100 −89.85)/46.99 = 21.6 21.6 −14.45 = 7.15 575 Table 5: Statistics of the number of state operations and the corresponding F1 scores of our model in MultiWOZ 2.1. # Operations F1 score Operation Type Train Valid Test Test CARRYOVER 1,584,757 212,608 212,297 98.66 UPDATE 61,628 8,287 8,399 80.10 DONTCARE 1,911 155 235 32.51 DELETE 1,224 80 109 2.86 Table 6: The minimum, average, and maximum number of slots whose values are generated at a turn, calculated on the test set of MultiWOZ 2.1. Model Min # Avg # Max # TRADE 30 30 30 ML-BST 30 30 30 COMER 0 5.72 18 SOM-DST (ours) 0 1.14 9 Table 7: Average inference time per dialogue turn of MultiWOZ 2.1 test set, measured on Tesla V100 with a batch size of 1. † indicates the case where BERT-large is used for our model. Model Joint Accuracy Latency TRADE 45.60 340 ms NADST 49.04 26 ms SOM-DST (ours) 53.01 27 ms SOM-DST† (ours) 53.68 40 ms of error comes from state operation predictor, implying that a large room for improvement currently exists in this component. Improving the state operation prediction accuracy, e.g., by tackling the class imbalance shown in Table 5, may have the potential to increase the overall DST performance by a large margin. 6.3 Efficiency Analysis In Table 6, we compare the number of slot values generated at a turn among various open vocabularybased DST models that use an autoregressive decoder. The maximum number of slots whose values are generated by our model at a turn, i.e., the number of slots on which UPDATE should be performed, is 9 at maximum and only 1.14 on average in the test set of MultiWOZ 2.1. On the other hand, TRADE and ML-BST generate the values of all the 30 slots at every turn of a dialogue. COMER generates only a subset of the slot values like our model, but it generates the values of all the slots that have a non-NULL value at a turn, which is 18 at maximum and 5.72 on average. Table 7 shows the latency of SOM-DST and several other models. We measure the inference time for a dialogue turn of MultiWOZ 2.1 on Tesla V100 with a batch size of 1. The models used for comparison are those with official public implementations. It is notable that the inference time of SOMDST is about 12.5 times faster than TRADE, which consists of only two GRUs. Moreover, the latency of SOM-DST is compatible with that of NADST, which explicitly uses non-autoregressive decoding, while SOM-DST achieves much higher joint goal accuracy. 
This shows the efficiency of the proposed selectively overwriting mechanism of SOM-DST, which generates only the minimal slot values at a turn. In Appendix B, we also investigate Inference Time Complexity (ITC) proposed in the work of Ren et al. (2019), which defines the efficiency of a DST model using J, the number of slots, and M, the number of values of a slot. 7 Conclusion We propose SOM-DST, an open vocabulary-based dialogue state tracker that regards dialogue state as an explicit memory that can be selectively overwritten. SOM-DST decomposes dialogue state tracking into state operation prediction and slot value generation. This setup makes the generation process efficient because the values of only a minimal subset of the slots are generated at each dialogue turn. SOM-DST achieves state-of-the-art joint goal accuracy on both MultiWOZ 2.0 and MultiWOZ 2.1 datasets in an open vocabulary-based setting. SOMDST effectively makes use of the explicit dialogue state and discrete operations to perform relatively robust DST even in complicated conversations. Further analysis shows that improving state operation prediction has the potential to increase the overall DST performance dramatically. From this result, we propose that tackling DST with our proposed problem definition is a promising future research direction. Acknowledgments The authors would like to thank the members of Clova AI for proofreading this manuscript. 576 References Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21, Berlin, Germany. Association for Computational Linguistics. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. Multiwoz - a largescale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling. In EMNLP. Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. ACM SIGKDD Explorations Newsletter, 19(2):25–35. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014a. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In EMNLP. Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014b. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyag Gao, and Dilek HakkaniTur. 2019. Multiwoz 2.1: Multi-domain dialogue state corrections and state tracking baselines. arXiv preprint arXiv:1907.01669. Shuyang Gao, Abhishek Sethi, Sanchit Agarwal, Tagyoung Chung, and Dilek Hakkani-Tur. 2019. Dialog state tracking: A neural reading comprehension approach. In SIGDIAL. Rahul Goel, Shachi Paul, and Dilek Hakkani-T¨ur. 2019. Hyst: A hybrid approach for flexible and accurate dialogue state tracking. In Interspeech. Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The second dialog state tracking challenge. In SIGDIAL. 
Hanjoo Kim, Minkyu Kim, Dongjoo Seo, Jinwoong Kim, Heungseok Park, Soeun Park, Hyunwoo Jo, KyungHyun Kim, Youngil Yang, Youngkwan Kim, et al. 2018. Nsml: Meet the mlaas platform with a real-world case study. arXiv preprint arXiv:1810.09957. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Hung Le, Doyen Sahoo, Chenghao Liu, Nancy F. Chen, and Steven C.H. Hoi. 2020a. End-to-end multidomain task-oriented dialogue systems with multilevel neural belief tracker. In Submitted to ICLR 2020. Hung Le, Richard Socher, and Steven C.H. Hoi. 2020b. Non-autoregressive dialog state tracking. In ICLR. Hwaran Lee, Jinsik Lee, and Tae-Yoon Kim. 2019. Sumbt: Slot-utterance matching for universal and scalable belief tracking. In ACL. Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1437–1447, Melbourne, Australia. Association for Computational Linguistics. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In ACL. Elnaz Nouri and Ehsan Hosseini-Asl. 2018. Toward scalable neural dialogue state tracking model. In 2nd Conversational AI workshop on NeurIPS 2018. Liliang Ren, Jianmo Ni, and Julian McAuley. 2019. Scalable and accurate dialogue state tracking via hierarchical sequence generation. In EMNLPIJCNLP. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In ACL. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958. Nako Sung, Minkyu Kim, Hyunwoo Jo, Youngil Yang, Jingwoong Kim, Leonard Lausen, Youngkwan Kim, Gayoung Lee, Donghyun Kwak, Jung-Woo Ha, et al. 2017. Nsml: A machine learning platform that enables you to focus on your models. arXiv preprint arXiv:1712.05902. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gasic, Lina M Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In EACL. Chien-Sheng Wu, Andrea Madotto, Ehsan HosseiniAsl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In ACL. Puyang Xu and Qi Hu. 2018. An end-to-end approach for handling unknown slot values in dialogue state tracking. In ACL. 577 Jian-Guo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Philip S. Yu, Richard Socher, and Caiming Xiong. 2019. Find or classify? dual strategy for slotvalue predictions on multi-domain dialog state tracking. arXiv preprint arXiv:1910.03544. Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive encoder for dialogue state tracking. In ACL. 578 A Data Statistics Table 8: Data Statistics of MultiWOZ 2.1. 
# of Dialogues # of Turns Domain Slots Train Valid Test Train Valid Test Attraction area, name, type 2,717 401 395 8,073 1,220 1,256 Hotel price range, type, parking, book stay, book day, book people, area, stars, internet, name 3,381 416 394 14,793 1,781 1,756 Restaurant food, price range, area, name, book time, book day, book people 3,813 438 437 15,367 1,708 1,726 Taxi leave at, destination, departure, arrive by 1,654 207 195 4,618 690 654 Train destination, day, departure, arrive by, book people, leave at 3,103 484 494 12,133 1,972 1,976 Table 9: Statistics of the slot value vocabulary size in MultiWOZ 2.1. Slot Value Vocabulary Size Slot Name Train Valid Test taxi-destination 373 213 213 taxi-departure 357 214 203 restaurant-name 202 162 162 attraction-name 186 145 149 train-leaveat 146 69 117 train-arriveby 112 64 101 restaurant-food 111 81 70 taxi-leaveat 105 68 65 hotel-name 93 65 58 restaurant-book time 64 50 51 taxi-arriveby 95 49 46 train-destination 27 25 24 train-departure 34 23 23 attraction-type 31 17 17 train-book people 11 9 9 hotel-book people 8 8 8 restaurant-book people 9 8 8 hotel-book day 13 7 7 hotel-stars 9 7 7 restaurant-book day 10 7 7 train-day 8 7 7 attraction-area 7 6 6 hotel-area 7 6 6 restaurant-area 7 6 6 hotel-book stay 10 5 5 hotel-parking 4 4 4 hotel-pricerange 7 5 4 hotel-type 5 5 4 restaurant-pricerange 5 4 4 hotel-internet 3 3 3 579 Table 10: Statistics of domain transition in the test set of MultiWOZ 2.1. There are 140 dialogues with more than one domain transition that end with taxi domain. The cases where domain switches more than once and ends in taxi are shown in bold. The total number of dialogues with more than one domain transition is 175. We can view these as complicated dialogues. Domain Transition First Second Third Fourth Count restaurant train 87 attraction train 80 hotel 71 train attraction 71 train hotel 70 restaurant 64 train restaurant 62 hotel train 57 taxi 51 attraction restaurant 38 restaurant attraction taxi 35 restaurant attraction 31 train 31 hotel attraction 27 restaurant hotel 27 restaurant hotel taxi 26 attraction hotel taxi 24 attraction restaurant taxi 23 hotel restaurant 22 attraction hotel 20 hotel attraction taxi 16 hotel restaurant taxi 13 attraction 12 attraction restaurant train 3 restaurant hotel train 3 hotel train restaurant 3 restaurant train hotel 3 restaurant taxi hotel 3 attraction train restaurant 2 train attraction restaurant 2 attraction restaurant hotel 2 hotel train attraction 2 attraction taxi hotel 1 hotel taxi 1 train hotel restaurant 1 restaurant taxi 1 restaurant train taxi 1 hotel restaurant train 1 hotel taxi train 1 taxi attraction 1 restaurant train attraction 1 attraction train hotel 1 attraction train taxi 1 restaurant attraction train 1 hotel taxi attraction 1 train hotel attraction 1 restaurant taxi attraction 1 hotel attraction restaurant taxi 1 attraction hotel train 1 taxi restaurant train 1 580 B Inference Time Complexity (ITC) Table 11: Inference Time Complexity (ITC) of each model. We report the ITC in both the best case and the worst case for more precise comparison. J indicates the number of slots, and M indicates the number of values of a slot. Inference Time Complexity Model Best Worst SUMBT Ω(JM) O(JM) DS-DST Ω(J) O(JM) DST-picklist Ω(JM) O(JM) DST Reader Ω(1) O(J) TRADE Ω(J) O(J) COMER Ω(1) O(J) NADST Ω(1) O(1) ML-BST Ω(J) O(J) SOM-DST(ours) Ω(1) O(J) Inference Time Complexity (ITC) proposed by Ren et al. 
(2019) defines the efficiency of a DST model using J, the number of slots, and M, the number of values of a slot. Going a step further from their work, we report ITC of the models in the best case and the worst case for relatively more precise comparison. Table 11 shows ITC of several models in their best and worst cases. Since our model generates values for only the slots on which UPDATE operation has to be performed, the best case complexity of our model is Ω(1), when there is no slot whose operation is UPDATE. 581 C Sample Outputs Figure 3: The output of SOM-DST in a dialogue (dialogue idx MUL2499) in the test set of MultiWOZ 2.1. Parts changed from the previous dialogue state are shown in blue. To save space, we omit the slots with value NULL from the figure. 582 Figure 4: The output of SOM-DST in a dialogue (dialogue idx PMUL3748) in the test set of MultiWOZ 2.1. Parts changed from the previous dialogue state are shown in blue. To save space, we omit the slots with value NULL from the figure.
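Returning to the ITC analysis in Appendix B, the reason the best case is Ω(1) and the worst case is O(J) can be made concrete with a schematic of the per-turn update loop. The sketch below is illustrative only; `predict_operations` and `generate_value` are hypothetical callables standing in for the state operation predictor and the slot value generator, not the released code.

```python
# Schematic per-turn update: the autoregressive value decoder is invoked only for
# slots whose predicted operation is UPDATE, so decoding cost ranges from 0 slots
# (best case) to all J slots (worst case).

def track_turn(prev_state, context, slots, predict_operations, generate_value):
    """prev_state: dict slot -> value; the two callables are hypothetical stand-ins."""
    operations = predict_operations(context, prev_state, slots)  # one pass covers all J slots
    new_state = dict(prev_state)
    generated = 0
    for slot in slots:
        op = operations[slot]
        if op == "UPDATE":                       # only these slots reach the value decoder
            new_state[slot] = generate_value(context, slot)
            generated += 1
        elif op == "DELETE":
            new_state[slot] = None
        elif op == "DONTCARE":
            new_state[slot] = "dontcare"
        # CARRYOVER: keep the previous value unchanged
    return new_state, generated                  # generated is 0 in the best case, J in the worst
```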
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5971–5978 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5971 Contextual Neural Machine Translation Improves Translation of Cataphoric Pronouns KayYen Wong† Sameen Maruf‡ Faculty of Information Technology, Monash University, VIC, Australia †[email protected][email protected] Gholamreza Haffari‡ Abstract The advent of context-aware NMT has resulted in promising improvements in the overall translation quality and specifically in the translation of discourse phenomena such as pronouns. Previous works have mainly focused on the use of past sentences as context with a focus on anaphora translation. In this work, we investigate the effect of future sentences as context by comparing the performance of a contextual NMT model trained with the future context to the one trained with the past context. Our experiments and evaluation, using generic and pronoun-focused automatic metrics, show that the use of future context not only achieves significant improvements over the context-agnostic Transformer, but also demonstrates comparable and in some cases improved performance over its counterpart trained on past context. We also perform an evaluation on a targeted cataphora test suite and report significant gains over the contextagnostic Transformer in terms of BLEU. 1 Introduction Standard machine translation (MT) systems typically translate sentences in isolation, ignoring essential contextual information, where a word in a sentence may reference other ideas or expressions within a piece of text. This locality assumption hinders the accurate translation of referential pronouns, which rely on surrounding contextual information to resolve cross-sentence references. The issue is further exacerbated by differences in pronoun rules between source and target languages, often resulting in morphological disagreement in the quantity and gender of the subject being referred to (Vanmassenhove et al., 2018). Rapid improvements in NMT have led to it replacing SMT as the dominant paradigm. With this, context-dependent NMT has gained traction, overcoming the locality assumption in SMT through the use of additional contextual information. This has led to improvements in not only the overall translation quality but also pronoun translation (Jean et al., 2017; Bawden et al., 2018; Voita et al., 2018; Miculicich et al., 2018). However, all these works have neglected the context from future sentences, with Voita et al. (2018) reporting it to have a negative effect on the overall translation quality. In this work, we investigate the effect of future context in improving NMT performance. We particularly focus on pronouns and analyse corpora from different domains to discern if the future context could actually aid in their resolution. We find that for the Subtitles domain roughly 16% of the pronouns are cataphoric. This finding motivates us to investigate the performance of a context-dependent NMT model (Miculicich et al., 2018) trained on the future context in comparison to its counterpart trained on the past context. We evaluate our models in terms of overall translation quality (BLEU) and also employ three types of automatic pronountargeted evaluation metrics. We demonstrate strong improvements for all metrics, with the model using future context showing comparable or in some cases even better performance than the one using only past context. 
We also extract a targeted cataphora test set and report significant gains on it with the future context model over the baseline. 2 Related Work Pronoun-focused SMT Early work in the translation of pronouns in SMT attempted to exploit coreference links as additional context to improve the translation of anaphoric pronouns (Le Nagard and Koehn 2010; Hardmeier and Federico 2010). These works yielded mixed results which were attributed to the limitations of the coreference resolution systems used in the process (Guillou, 2012). 5972 Domain #Sentences Document length English-German Subtitles 9.39M/9K/14.1K 565.8/582.2/591.0 Europarl 1.67M/3.6K/5.1K 14.1/15.0/14.1 TED Talks 0.21M/9K/2.3K 120.9/96.4/98.7 English-Portuguese Subtitles 15.2M/16.1K/23.6K 580.4/620.6/605.3 Table 1: Train/dev/test statistics: number of sentences (K: thousands, M: millions), and average document length (in sentences). The #Documents can be obtained by dividing the #Sentences by the Document Length. Context-Aware NMT Multiple works have successfully demonstrated the advantages of using larger context in NMT, where the context comprises few previous source sentences (Wang et al., 2017; Zhang et al., 2018), few previous source and target sentences (Miculicich et al., 2018), or both past and future source and target sentences (Maruf and Haffari, 2018; Maruf et al., 2018, 2019). Further, context-aware NMT has demonstrated improvements in pronoun translation using past context, through concatenating source sentences (Tiedemann and Scherrer, 2017) or through an additional context encoder (Jean et al., 2017; Bawden et al., 2018; Voita et al., 2018). Miculicich et al. (2018) observed reasonable improvements in generic and pronoun-focused translation using three previous sentences as context. Voita et al. (2018) observed improvements using the previous sentence as context, but report decreased BLEU when using the following sentence. We, on the other hand, observe significant gains in BLEU when using the following sentence as context on the same data domain. 3 Contextual Analysis of Corpora To motivate our use of the future context for improving the translation of cataphoric pronouns in particular and NMT in general, we first analyse the distribution of coreferences for anaphoric and cataphoric pronouns over three different corpora OpenSubtitles20181 (Lison and Tiedemann, 2016), Europarl (Koehn, 2005) and TED Talks (Cettolo et al., 2012) - for English-German. For Europarl and TED Talks, we use the publicly available document-aligned version of the corpora (Maruf et al., 2019). For Subtitles, we align the English and German subtitles at the document-level using publicly available alignment links.2 To control for the length and coherency of documents, we keep 1http://www.opensubtitles.org/ 2http://opus.nlpl.eu/OpenSubtitles2018.php Pronoun Subtitles Europarl TED Talks Intrasentential 30.1 75.6 64.1 Anaphora (< 0) 54.3 19.6 28.5 Cataphora (> 0) 15.6 4.7 7.4 Table 2: Percentage of different pronoun types. 1 2 3 4 5 6 7 8 9 10 0 10 20 30 40 %age of occurences Anaphora Cataphora Figure 1: Plot showing proportion of intersentential English pronouns versus size of coreference resolution window for the Subtitles corpus (plots for Europarl and TED Talks are in the appendix). subtitles with a run-time less than 50 minutes (for English) and those with number of sentences in the hundreds. The corpus is then randomly split into training, development and test sets in the ratio 100:1:1.5. Table 1 presents the corpora statistics. 
Analysis of Coreferences We find the smallest window within which a referential English pronoun is resolved by an antecedent or postcedent using NeuralCoref.3 Table 2 shows that the majority of pronouns in Europarl and TED Talks corpora are resolved intrasententially, while the Subtitles corpus demonstrates a greater proportion of intersentential coreferences. Further, anaphoric pronouns are much more frequent compared to cataphoric ones across all three corpora. For Subtitles, we also note that a good number of pronouns (15.6%) are cataphoric, ∼37% of which are resolved within the following sentence (Figure 1). This finding motivates us to investigate the performance of a context-aware NMT model (trained on Subtitles) for the translation of cataphoric pronouns. 4 Experiments Datasets We experiment with the Subtitles corpus on English-German and English-Portuguese language-pairs. To obtain English-Portuguese data, we employ the same pre-processing steps as reported in §3 (corpus statistics are in Table 1). We use 80% of the training data to train our models and the rest is held-out for further evaluation as discussed later in § 4.2.4 The data is truecased using 3https://github.com/huggingface/neuralcoref 4Due to resource contraints, we use about two-thirds of the final training set (∼8M sentence-pairs) for En-Pt. 5973 Lang. Pair Baseline HAN(k = +1) HAN(k = -1) English→German 31.87 32.53 32.48 German→English 35.92 36.64♠ 36.32 English→Portuguese 35.45 36.04 36.21 Portuguese→English 39.34 39.96♠ 39.69 Table 3: BLEU for the Transformer baseline and Transformer-HAN with following sentence (k = +1) and previous sentence (k = -1). ♠: Statistically significantly better than HAN (k = -1). the Moses toolkit (Koehn et al., 2007) and split into subword units using a joint BPE model with 30K merge operations (Sennrich et al., 2016).5 Description of the NMT systems As our baseline, we use the DyNet (Neubig et al., 2017) implementation of Transformer (Vaswani et al., 2017).6 For the context-dependent NMT model, we choose the Transformer-HAN encoder (Miculicich et al., 2018), which has demonstrated reasonable performance for anaphoric pronoun translation on Subtitles. We extend its DyNet implementation (Maruf et al., 2019) to a single context sentence.78 For training, Transformer-HAN is initialised with the baseline Transformer and then the parameters of the whole network are optimised in a second stage as in Miculicich et al. (2018) (details of model configuration are in Appendix A.1). For evaluation, we compute BLEU (Papineni et al., 2002) on tokenised truecased text and measure statistical significance with p < 0.005 (Clark et al., 2011). 4.1 Results We consider two versions of the Transformer-HAN respectively trained with the following and previous source sentence as context. From Table 3, we note both context-dependent models to significantly outperform the Transformer across all language-pairs in terms of BLEU. Further, HAN (k = +1) demonstrates statistically significant improvements over the HAN (k = -1) when translating to English. These results are quite surprising as Voita et al. (2018) report decreased translation quality in terms of BLEU when using the following sentence for English→Russian Subtitles. To 5Tokenisation is provided by the original corpus. 6https://github.com/duyvuleo/ Transformer-DyNet 7Where in the original architecture, k sentence-context vectors were summarised into a document-context vector, we omit this step when using only one sentence in context. 
8The code and data are available at https://github. com/sameenmaruf/acl2020-contextnmt-cataphora. English→German Model APT Precision Recall F1-score CRC Baseline 60.8 47.4 54.3 50.7 +HAN(k = +1) 61.4 48.3 54.3 51.1 +HAN(k = -1) 62.0 48.0 54.6 51.1 German→English Model APT Precision Recall F1-score CRC Baseline 77.9 56.9 50.4 53.4 50.4 +HAN(k = +1) 78.3 57.9 50.6 54.0 50.9 +HAN(k = -1) 78.3 58.0 50.5 54.0 51.0 English→Portuguese Model APT Precision Recall F1-score CRC Baseline 46.4 54.8 56.0 55.4 +HAN(k = +1) 47.0 55.8 55.2 55.5 +HAN(k = -1) 47.3 56.0 55.4 55.7 Portuguese→English Model APT Precision Recall F1-score CRC Baseline 64.3 54.9 51.1 53.0 50.2 +HAN(k = +1) 64.6 55.7 51.5 53.5 50.9 +HAN(k = -1) 64.3 55.6 51.2 53.4 51.6 Table 4: Pronoun-focused evaluation on generic test set for models trained on different types of context. identify if this discrepancy is due to the languagepair or the model, we conduct experiments with English→Russian in the same data setting as Voita et al. (2018) and find that HAN (k = +1) still significantly outperforms the Transformer and is comparable to HAN (k = -1) (more details in Appendix A.2). 4.2 Analysis Pronoun-Focused Automatic Evaluation For the models in Table 3, we employ three types of pronoun-focused automatic evaluation: 1. Accuracy of Pronoun Translation (APT) (Miculicich Werlen and Popescu-Belis, 2017)9. This measures the degree of overlapping pronouns between the output and reference translations obtained via word-alignments. 2. Precision, Recall and F1 scores. We use a variation of AutoPRF (Hardmeier and Federico, 2010) to calculate precision, recall and F1-scores. For each source pronoun, we compute the clipped count (Papineni et al., 2002) of overlap between candidate and reference translations. To eliminate word alignment errors, we compare this overlap over the set of dictionarymatched target pronouns, in contrast to the set of target words aligned to a given source pronoun as done by AutoPRF and APT. 3. Common Reference Context (CRC) (Jwalapuram et al., 2019). In addition to the previous 9https://github.com/idiap/APT 5974 Model BLEU APT F1-score Baseline (Transformer) 31.87 61.6 49.1 +HAN(k = ∅) 32.30 61.6 49.1 +HAN(k = +1,+2) 32.56♠62.0 49.8 +HAN(k = -2,-1) 32.47 61.9 49.8 +HAN(k = -2,-1,+1,+2) 32.59♠62.0 49.9 Table 5: Evaluation on English→German generic test set for HAN trained with k = {-2,-1,+1,+2} but decoded with varying context. ♠: Statistically significantly better than HAN with no context (k = ∅). two measures which rely on computing pronoun overlap between the target and reference translation, we employ an ELMo-based (Peters et al., 2018) evaluation framework that distinguishes between a good and a bad translation via pairwise ranking (Jwalapuram et al., 2019). We use the CRC setting of this metric which considers the same reference context (one previous and one next sentence) for both reference and system translations. However, this measure is limited to evaluation only on the English target-side.10 The results using the aforementioned pronoun evaluation metrics are reported in Table 4. We observe improvements for all metrics with both HAN models in comparison to the baseline. Further, we observe that the HAN (k = +1) is either comparable to or outperforms HAN (k = -1) on APT and F1 for De→En and Pt→En, suggesting that for these cases, the use of following sentence as context is at least as beneficial as using the previous sentence. 
For En→De, we note comparable performance for the HAN variants in terms of F1, while for En→Pt, the past context appears to be more beneficial.11 In terms of CRC, we note HAN (k = -1) to be comparable to (De→En) or better than HAN (k = +1) (Pt→En). We attribute this to the way the metric is trained to disambiguate pronoun translations based on only the previous context and thus may have a bias for such scenarios. Ablation Study We would like to investigate whether a context-aware NMT model trained on a wider context could perform well even if we do not have access to the same amount of context at decoding. We thus perform an ablation study for 10We use the same English pronoun list for all pronounfocused metrics (provided by Jwalapuram et al. (2019) at https://github.com/ntunlp/eval-anaphora). All pronoun sets used in our evaluation are provided in Appendix A.4. 11It should be noted that for Portuguese, adjectives and even verb forms can be marked by the gender of the noun and these are hard to account for in automatic pronoun-focused evaluations. English→German Model Cataphora DET PROPN NOUN Baseline 32.33 32.14 33.02 32.93 +HAN(k = +1) 32.93♠ 32.68♠33.98♠33.76♠ German→English Model Cataphora DET PROPN NOUN Baseline 36.91 36.35 38.81 38.84 +HAN(k = +1) 37.68♠ 37.19♠ 39.51 39.45 English→Portuguese Model Cataphora DET PROPN NOUN Baseline 36.29 35.91 37.91 37.60 +HAN(k = +1) 37.08♠ 36.70♠ 38.49 38.19 Portuguese→English Model Cataphora DET PROPN NOUN Baseline 40.74 40.12 42.77 42.63 +HAN(k = +1) 41.63♠ 41.06♠43.60♠43.42♠ Table 6: BLEU on the cataphora test set and its subsets for the Transformer and Transformer-HAN (k = +1). ♠: Statistically significantly better than the baseline. English→German using the HAN model trained with two previous and next sentences as context and decoded with variant degrees of context. From Table 5, we note that reducing the amount of context at decoding time does not have adverse effect on the model’s performance. However, when no context is used, there is a statistically significant drop in BLEU, while APT and F1-scores are equivalent to that of the baseline. This suggests that the model does rely on the context to achieve the improvement in pronoun translation. Further, we find that the future context is just as beneficial as the past context in improving general translation performance. Cataphora-Focused Test Suite To gauge if the improvements in Table 3 for the HAN (k = +1) model are coming from the correct translation of cataphoric pronouns, we perform an evaluation on a cataphoric pronoun test suite constructed from the held-out set mentioned earlier in § 3. To this end, we apply NeuralCoref over the English side to extract sentence-pairs which have a cataphoric pronoun in one sentence and the postcedent in the next sentence. This is further segmented into subsets based on the part-of-speech of the postcedent, that is, determiner (DET), proper noun (PROPN) or all nouns (NOUN) (more details in the appendix).12 From Table 6, we observe HAN (k = +1) to outperform the baseline for all language-pairs when evaluated on the cataphora test suite. In general, we observe greater improvements in BLEU when trans12We note that there may be some overlap between the three pronoun subsets as a test sentence may contain more than one type of pronoun. 5975 Figure 2: Example attention map between source (yaxis) and context (x-axis). The source pronoun he correctly attends to the postcedents Richard in the context. 
lating to English, which we attribute to the simplification of cross-lingual pronoun rules when translating from German or Portuguese to English.13 We also observe fairly similar gains in BLEU across the different pronoun subsets, which we hypothesise to be due to potential overlap in test sentences between different subsets. Nevertheless, we note optimum translation quality over the noun subsets (PROPN and NOUN), while seeing the greatest percentage improvement on the DET subset. For the latter, we surmise that the model is able to more easily link pronouns in a sentence to subjects prefixed with possessive determiners, for example, “his son” or “their child”. We also perform an auxiliary evaluation for Transformer-HAN (k = -1) trained with the previous sentence as context on the cataphora test suite and find that the BLEU improvements still hold. Thus, we conclude that Transformer-HAN (a context-aware NMT model) is able to make better use of coreference information to improve translation of pronouns (detailed results in Appendix A.3). Qualitative Analysis We analyse the distribution of attention to the context sentence for a few test cases.14 Figure 2 shows an example in which a source pronoun he attends to its corresponding postcedent in context. This is consistent with our hypothesis that the HAN (k = +1) is capable of exploiting contextual information for the resolution of cataphoric pronouns. 5 Conclusions In this paper, we have investigated the use of future context for NMT and particularly for pronoun translation. While previous works have focused on the 13It should be noted that the cataphora test set is extracted based on the existence of cataphoric-pairs in the English-side, which may have biased the evaluation when English was in the target. 14Attention is average of the per-head attention weights. use of past context, we demonstrate through rigorous experiments that using future context does not deteriorate translation performance over a baseline. Further, it shows comparable and in some cases better performance as compared to using the previous sentence in terms of both generic and pronounfocused evaluation. In future work, we plan to investigate translation of other discourse phenomena that may benefit from the use of future context. Acknowledgments The authors are grateful to the anonymous reviewers for their helpful comments and feedback and to George Foster for fruitful discussions. This work is supported by a Google Faculty Research Award to G.H. It is further supported by the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) (www.massive.org.au). References Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating discourse phenomena in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1304–1313, New Orleans, Louisiana, USA. Association for Computational Linguistics. Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT3: Web inventory of transcribed and translated talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), pages 261–268, Trento, Italy. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (Short Papers), pages 176–181. Association for Computational Linguistics. Liane Guillou. 2012. Improving pronoun translation for statistical machine translation. In Proceedings of the Student Research Workshop at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 1–10, Avignon, France. Association for Computational Linguistics. Christian Hardmeier and Marcello Federico. 2010. Modelling pronominal anaphora in statistical machine translation. In Proceedings of the Seventh International Workshop on Spoken Language Translation (IWSLT), pages 283–289. Sebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does neural machine 5976 translation benefit from larger context? CoRR, abs/1704.05135. Prathyusha Jwalapuram, Shafiq Joty, Irina Temnikova, and Preslav Nakov. 2019. Evaluating pronominal anaphora in machine translation: An evaluation measure and a test suite. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 2957–2966, Hong Kong, China. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Conference Proceedings: the 10th Machine Translation Summit, pages 79–86. AAMT. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177–180. Association for Computational Linguistics. Ronan Le Nagard and Philipp Koehn. 2010. Aiding pronoun translation with co-reference resolution. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 252–261, Uppsala, Sweden. Association for Computational Linguistics. Pierre Lison and J¨org Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from Movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation, pages 923–929, Portoroˇz, Slovenia. European Language Resources Association. Sameen Maruf and Gholamreza Haffari. 2018. Document context neural machine translation with memory networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1275– 1284, Melbourne, Australia. Association for Computational Linguistics. Sameen Maruf, Andr´e F. T. Martins, and Gholamreza Haffari. 2018. Contextual neural model for translating bilingual multi-speaker conversations. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 101–112, Brussels, Belgium. Association for Computational Linguistics. Sameen Maruf, Andr´e F. T. Martins, and Gholamreza Haffari. 2019. Selective attention for context-aware neural machine translation. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long and Short Papers), pages 3092–3102, Minneapolis, Minnesota, USA. Association for Computational Linguistics. Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2947–2954. Lesly Miculicich Werlen and Andrei Popescu-Belis. 2017. Validation of an automatic metric for the accuracy of pronoun translation (APT). In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 17–25, Copenhagen, Denmark. Association for Computational Linguistics. Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017. Dynet: The dynamic neural network toolkit. arXiv preprint arXiv:1701.03980. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1715–1725. J¨org Tiedemann and Yves Scherrer. 2017. Neural machine translation with extended context. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 82–92, Copenhagen, Denmark. Association for Computational Linguistics. 5977 Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting gender right in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3003–3008, Brussels, Belgium. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264–1274, Melbourne, Australia. Association for Computational Linguistics. Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2826–2831, Copenhagen, Denmark. Association for Computational Linguistics. Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. Improving the transformer translation model with document-level context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 533–542, Brussels, Belgium. Association for Computational Linguistics. A Experiments A.1 Model Configuration We use similar configuration as the Transformerbase model (Vaswani et al., 2017) except that we reduce the number of layers in the encoder and decoder stack to 4 following Maruf et al. (2019). For training, we use the default Adam optimiser (Kingma and Ba, 2015) with an initial learning rate of 0.0001 and employ early stopping. A.2 English→Russian Experiments We wanted to compare the two variants of Transformer-HAN with k = +1 and k = -1 in the same experimental setting as done by Voita et al. (2018). The data they made available only contains the previous context sentence. Thus, we extract training, development and test sets following the procedure in this work but of roughly the same size as Voita et al. (2018) for a fair comparison of the two context settings. While they extract their test set as a random sample of sentences, we extract 1 2 3 4 5 6 7 8 9 10 0 10 20 30 %age of occurences Anaphora Cataphora (a) Europarl 1 2 3 4 5 6 7 8 9 10 0 10 20 30 %age of occurences Anaphora Cataphora (b) TED Talks Figure 3: Plots showing proportion of intersentential English pronouns versus size of coreference resolution window for Europarl and TED Talks corpora. from a random sample of documents, resulting in a test set which has document-level continuity between sentences. The pre-processing pipeline is the same as the one used for our English-German and English-Portuguese experiments except that we perform lowercasing (instead of truecasing) and learn separate BPE codes for source and target languages following Voita et al. (2018). We also evaluate the models trained with our training set on the test set provided by Voita et al. (2018) after removing the sentences overlapping with our train and dev sets (corpora statistics are in Table 7). Results Table 8 indicates that the model trained with the next sentence as context not only statistically significantly outperforms the Transformer baseline (+0.9 BLEU) but also demonstrates comparable performance to the HAN model trained Origin #Sentences Document length Voita et al. (2018) 2M/10K/10K Ours 2M/11K/10K 606.3/620.6/631.6 Our, Voita et al. (2018)⋆2M/11K/7.3K 606.3/620.6/Table 7: Train/dev/test statistics for English-Russian: number of sentences (K: thousands, M: millions), and average document length (in sentences). The first row mentions statistics of data used by Voita et al. (2018), the second row mentions statistics of data we extracted, and the third row mentions the data statistics for our train/dev sets and Voita et al. (2018)’s test after removing overlap (referred as Voita et al. (2018)⋆). 5978 Data Setting BaselineHAN(k = +1)HAN(k = -1) Ours 23.35 24.25 24.18 Our, Voita et al. (2018)⋆ 27.15 28.23 Table 8: BLEU on tokenised lowercased text for the Transformer baseline and Transformer-HAN with following sentence (k = +1) and previous sentence (k = -1) for English→Russian. All reported results for the HAN variants are statistically significantly better than the baseline. with the previous sentence. 
This finding is consistent with our main results. We also evaluate the model trained with our training data on Voita et al. (2018)⋆test set and report almost four points jump in the absolute BLEU score for both the baseline and the context-dependent model.15 In addition, we note that for their test set, the HAN (k = -1) has greater percentage improvement over the baseline (4%) than what they report for their context-aware model (2.3%). A.3 Cataphora-Focused Test Suite We segment the cataphora test set into the following subsets based on the part-of-speech of the postcedent being referred to: • DET Postcedents prefixed with possessive determiners, e.g., his son or their child. • PROPN Postcedents which are proper nouns, i.e., named entities. • NOUN Postcedents which are nouns, including proper nouns and common nouns, such as boy or child. A.3.1 Results for HAN (k = -1) We evaluate Transformer-HAN (k = -1) enriched with anaphoric context on the cataphora test set (Table 9) to determine if this context-aware model is making use of coreference information to improve the overall translation quality (in BLEU). We find that HAN (k = +1) performs better than HAN (k = -1) when English is in the target-side, which we hypothesise to be because of the extraction of the cataphora test suite from the Englishside. However, when English is in the source-side, both models perform comparably showing that the Tranformer-HAN (a context-aware NMT model) is able to make better use of coreference information to improve translation of pronouns. 15The BLEU score for the baseline on Voita et al. (2018)⋆ is less than the one reported in their original work because of the reduced size of the test set and the different training set. Lang. Pair Baseline HAN(k = -1) English→German 32.33 32.94 German→English 36.91 37.23 English→Portuguese 36.29 37.24 Portuguese→English 40.74 41.25 Table 9: BLEU on the cataphora test set for the Transformer and Transformer-HAN (k = -1). All results for HAN (k = -1) are statistically significantly better than the baseline. A.4 Pronoun Sets Language Pronouns English i, me, my, mine, myself, we, us, our, ours, ourselves, you, your, yours, yourself, yourselves, he, his, him, himself, she, her, hers, herself, it, its, itself, they, them, their, themselves, that, this, these, those, what, whatever, which, whichever, who, whoever, whom, whose German ich, du, er, sie, es, wir, mich, dich, sich, ihn, uns, euch, mir, dir, ihm, ihr, ihre, ihrer, ihnen, meiner, mein, meine, deiner, dein, seiner, sein, seine, unser, unsere, euer, euere, denen, dessen, deren, meinen, meinem, deinen, deinem, deines, unserer, unseren, unseres, unserem, ihrem, ihres, seinen, seinem, seines Portuguese eu, n´os, tu, vocˆe, vocˆes, ele, ela, eles, elas, me, te, nos, vos, o, lo, no, a, la, na, lhe, se, os, los, as, las, nas, lhes, mim, ti, si, meu, teu, seu, nosso,vosso, minha, tua, sua, nossa, vossa, meus teus, seus, nossos, vossos, minhas, tuas, suas nossas, vossas, dele, dela, deles, delas, quem que, qual, quais, cujo, cujos, cuja, cujas, onde Table 10: Pronoun sets used in our pronoun-focused automatic evaluation.
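To make the pronoun-focused scores in §4.2 concrete, the sketch below gives a simplified version of the clipped-count precision/recall/F1 (item 2 in §4.2) computed over a target-language pronoun list such as those in Table 10. It collapses the per-source-pronoun bookkeeping into a per-sentence computation and ignores word alignments, so it should be read as an approximation of the metric rather than the implementation used in the paper.

```python
# Simplified clipped-count pronoun precision/recall/F1 over a fixed pronoun list
# (an approximation of the metric described in Section 4.2, item 2).
from collections import Counter

def pronoun_prf(candidates, references, pronoun_set):
    tp = cand_total = ref_total = 0
    for cand, ref in zip(candidates, references):
        cand_counts = Counter(w for w in cand.lower().split() if w in pronoun_set)
        ref_counts = Counter(w for w in ref.lower().split() if w in pronoun_set)
        tp += sum(min(c, ref_counts[w]) for w, c in cand_counts.items())  # clipped overlap
        cand_total += sum(cand_counts.values())
        ref_total += sum(ref_counts.values())
    precision = tp / cand_total if cand_total else 0.0
    recall = tp / ref_total if ref_total else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```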
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5979–5989 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5979 Improving Neural Machine Translation with Soft Template Prediction Jian Yang1 ∗, Shuming Ma2, Dongdong Zhang2, Zhoujun Li1 †, Ming Zhou2 1State Key Lab of Software Development Environment, Beihang University 2Microsoft Research Asia {jiaya, lizj}@buaa.edu.cn; {shumma, dozhang, mingzhou}@microsoft.com; Abstract Although neural machine translation (NMT) has achieved significant progress in recent years, most previous NMT models only depend on the source text to generate translation. Inspired by the success of template-based and syntax-based approaches in other fields, we propose to use extracted templates from tree structures as soft target templates to guide the translation procedure. In order to learn the syntactic structure of the target sentences, we adopt the constituency-based parse tree to generate candidate templates. We incorporate the template information into the encoder-decoder framework to jointly utilize the templates and source text. Experiments show that our model significantly outperforms the baseline models on four benchmarks and demonstrate the effectiveness of soft target templates. 1 Introduction Recently, neural machine translation (NMT) (Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017; Chen et al., 2018) has achieved significant progress. Some advanced models (Chatterjee et al., 2016; Niehues et al., 2016; Junczys-Dowmunt and Grundkiewicz, 2017; Geng et al., 2018; Zhou et al., 2019a) predict the ultimate translation by multi-pass generation conditioned on the previous text such as CMLMs (Ghazvininejad et al., 2019), ABD-NMT (Zhang et al., 2018), SynST (Akoury et al., 2019), and Deliberation Network (Xia et al., 2017). Inspired by these works and the successful application of templates for other intriguing tasks, including semantic parsing (Dong and Lapata, 2018), summarization (Cao et al., 2018; Wang et al., 2019a), question answering (Duan et al., 2017; ∗Contribution during internship at Microsoft Research Asia. †Corresponding author. I like playing basketball 我 喜欢 打 篮球 Source S like VP Template Target Figure 1: An example of template guided translation results. S denotes subject and VP denotes verb phrase. Pandey et al., 2018), and other text generation tasks (Wiseman et al., 2018; Guu et al., 2018), we assume the candidate templates of the target sentences can guide the sentence translation process. We denote these templates extracted from the constituencybased parse tree as soft templates, which consist of tags and target words. The templates are soft because no explicit paradigms are inaugurated to build new translation from them, and the target tokens could be modified. In order to effectively use the templates, we introduce soft template-based neural machine translation (ST-NMT), which can use source text and soft templates to predict the final translation. Our approach can be split into two phases. In the first phase, a standard Transformer model is trained to predict soft target templates by using source text and templates extracted from the constituencybased parse tree. Secondly, we use two encoders, including a soft target template encoder and a source language encoder to encode source text and templates and generate the final translation. 
As shown in Figure 1, given the source text “我喜欢 打篮球” and the target template “S like to VP”, the final translation “I like to play basketball” is generated by two encoders. In this work, the templates play a part in guiding, and some target tokens in 5980 Self Attention Add & Norm Add & Norm Cross Attention Add & Norm Feed Forward Self Attention Add & Norm Add & Norm Feed Forward Self Attention Add & Norm Add & Norm Feed Forward Position Encoding Position Encoding Position Encoding Linear & Softmax Template Encoder Source Encoder Source Embedding Template Embedding Target Embedding x1 x4 x2 x3 x5 𝑡1 𝑡4 𝑦2 ′ t3 𝑡5 𝑦1 𝑦2 𝑦3 𝑦5 𝑦4 Target Decoder N× N× N× Figure 2: Overview of our ST-NMT. Given the source text and the soft target template predicted by the PθX→Y , the source language Transformer encoder and target template Transformer encoder maps two sequences X = (x1, x2, x3, x4, x5) and T = (t1, y′ 2, t3, t4, t5) into hidden states ZX and ZT . xi denotes the source word, ti denotes the template tag and yi denotes the target word. y′ i also denotes the target word but it can be modified to the other target words. The ultimate translation Y is generated by a Transformer decoder which incorporates the context ZX and ZY in the second phase. the template could also be modified. In order to prove the effectiveness of our approach, we conduct main experiments on the popular benchmarks, including IWSLT14 GermanEnglish translation task, WMT14 English-German translation task, LDC Chinese-English translation task, and ASPEC Japanese-Chinese translation task. Experiments show that our approach achieves significant improvement compared to the baselines, which demonstrates the soft target templates can provide a positive influence for guiding translation procedure effectively. Our approach can be used for diverse scale data sets, different styles, and multiple language pairs. 2 Our Approach Our model first reads the source language sequence X = (x1, x2, x3, . . . , xn) in the conventional way by a source language Transformer encoder and generates the template sequence T = (t1, t2, t3, . . . , tm) by a template Transformer decoder. As shown in Figure 2, our model uses a source language Transformer encoder and a template Transformer encoder, which encodes the source language sequence X and the template sequence T separately. We deploy the target language decoder to generate the final translation. In this section, we present the details of the proposed template-based approach. Our method mainly includes two phases: (1) The training data is constructed by the constituency-based parse tree. Then, we adopt a standard Transformer to convert the source text to the soft target template for the next generation. (2) Based on the source text and the predicted soft target template, we utilize two encoders to encode two sequences into hidden states separately and a target language decoder to generate the ultimate translation. 2.1 Soft Template Prediction In this procedure, we model the PθX→T (T|X) to predict soft target templates on top of the constructed training data DX,T . To construct DX,T , we use a constituency-based parser to parse the target sequence and get a tree structure. Then, we prune nodes which are deeper than the specific depth and recover the left leaf nodes to the ordered template sequence. Through these operations, we gain the parallel training data DX,T and train a standard Transformer model PθX→T (T|X) to predict the soft target template. 
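As a concrete illustration of this extraction step, the sketch below prunes a constituency tree at a fixed depth and reads the leaves of the pruned tree off from left to right, recovering the word at surviving pre-terminals and keeping the phrase label where a subtree was cut. This follows our reading of the procedure; the example bracketing and the exact depth convention (whether terminal words count as their own level, which can shift the cut-off by one) are assumptions on our part, not the authors' code.

```python
# Extract a soft template by pruning an NLTK-style constituency tree at a fixed depth.
from nltk.tree import Tree

def extract_template(node, max_depth, depth=1):
    if not isinstance(node, Tree):                 # bare terminal word
        return [node]
    preterminal = len(node) == 1 and not isinstance(node[0], Tree)
    if depth >= max_depth:
        # cut point: recover the word at a pre-terminal, otherwise keep the phrase label
        return [node[0]] if preterminal else [node.label()]
    template = []
    for child in node:
        template.extend(extract_template(child, max_depth, depth + 1))
    return template

parse = Tree.fromstring(
    "(S (NP (EX There)) (VP (VBP are) (NP (DT some) (NNS people)) (VP (VBG running))))")
print(extract_template(parse, max_depth=3))        # -> ['There', 'are', 'NP', 'VP']
```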
The constituency-based parse tree could reveal the structure information of the whole sentence which utilizes the constituency grammar to distinguish terminal and non-terminal nodes. More specifically, the interior nodes are labeled by nonterminal categories which belong to the set of nonterminal tokens S, while the leaf nodes are labeled 5981 Pruned Figure 3: The constituency-based parse tree of the example sentence. Given the target sentence and definite depth of the tree, we gain the sub-tree by pruning the nodes deeper than 4 in this case. Then, the sub-tree can be converted to the soft target template “There are NP VP” from left to right. by terminal categories V . S = {S, VP, NP, ..., ASBR} and V is the vocabulary set of the target language. For example, the sentence “There are some people running” could be expressed as Figure 3. In this case, the non-terminal tokens consist of S0 = {S, NP, VP, EX, VBP, NP, DT, NNS, VBG} and the terminal tokens are composed of V0 = {There, are some, people, running}. Our template T = {t1, t2, t3, t4} is the ordered sequence which is composed of non-terminal tokens and terminal tokens. In this case, t1=There, t2=are, t3=VP and t4=NP. Our template extraction aims to extract the sub-tree of the specific depth and use these nonterminal and terminal tokens locating at the leaf node of sub-tree. In order to predict the soft target templates, we train a standard Transformer model given the training data of the source text and extracted templates. The Transformer model reads the source text and predicts the soft target templates using beam search. Then, we select the top-K results of the beam search as templates. The depth of the tree is a trade-off. In Figure 3, One special case is that when the depth equals 1, the template only has one symbol “S”. The template “S” cannot provide any useful information. Another special case is that when depth is greater than 6, the template “There are some people running” only has terminal tokens. The template only contains target words, which cannot provide any additional information. When the depth equals 4, the template is “There are NP VP”. The template contains sentence syntactic and structural information, which is suitable for our method. With the Transformer model PθX→T (T|X), we need to construct the pseudo training data DX,T,Y instead of directly using extracted templates by soft template prediction. Given the source text X, we use PθX→T (T|X) to predict the top-1 soft target template T with beam search. Therefore, we get the triple training data DX,T,Y = {X(i), T (i), Y (i)}N i=1 which is prepared for the next phase. 2.2 Machine Translation via Soft Templates The triple training data DX,T,Y is used to model the probability P(X,T)→Y from the two sequences to the ultimate translation. Our approach could generate the target sentence Y , given the source sequence X and template T. Formulation In formula, we could model the whole procedure on top of the PθX→T (T|X) and Pθ(X,T )→Y (Y |X, T). P(Y |X) = PθX→T (T|X)Pθ(X,T )→Y (Y |X, T) (1) where θX→T and θ(X,T)→Y are the parameters for the first and the second phase. The source language Transformer encoder and the soft template Transformer encoder maps the input sequence X and the template T composed of target language words and tags to the hidden states. Then, a Transformer decoder interacting with two encoders generates the final translation Y , described by the Equation 1. 
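Operationally, Equation 1 corresponds to a two-pass inference pipeline: beam search with the first-stage model produces a template, and the two-encoder model then generates the translation conditioned on the source and that template. The sketch below is schematic; `template_model` and `translation_model` are hypothetical handles for the two trained Transformers rather than an existing API, and feeding the top-1 template mirrors the construction of the triple training data described above.

```python
# Schematic two-pass inference implied by Eq. (1).

def translate(source, template_model, translation_model, beam_size=5):
    # First pass: predict candidate soft templates from the source text.
    templates = template_model.beam_search(source, beam_size=beam_size)
    best_template = templates[0]                   # top-1 template, as in data construction
    # Second pass: generate the final translation from source + predicted template.
    hypotheses = translation_model.beam_search(source, template=best_template,
                                               beam_size=beam_size)
    return hypotheses[0]
```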
Encoder In the second phase, our template Transformer encoder and the source language Transformer encoder are stacked by blocks which contain self-attention layers with residual connections, layer normalization and fully connected feedforward network (FFN). Therefore, the hidden states of the source language Transformer encoder and the template Transformer encoder are calculated by: hl = TransformerBlock(hl−1) (2) 5982 where hl = hX l for the source language Transformer encoder and hl = hT l for the template Transformer encoder. N is the number of layers and l ∈[1, N]. Decoder Based on the hidden states hX l and hT l , the target language Transformer decoder use the encoder-decoder multi-head attention to jointly use the source language and template information to generate the ultimate translation Y . Besides, the target sequence decoder uses multi-head attention to obtain the representations of target language decoder with the parameters (W Q X , W K X , W V X ) and (W Q T , W K T , W V T ) for different encoders. In each attention head, the input sequence X = (x1, . . . , xm) and the template T = (t1, . . . , tn) can be mapped into ZX = (zX 1 , zX 2 , . . . , zX m) and ZT = (zT 1 , zT 2 , . . . , zT n ) using the source language Transformer encoder and the template Transformer encoder. On top of the ZX and ZT , the decoder separately calculate the multi-head attention with source sentence context X = (x1, . . . , xm) and target template sentence T = (t1, . . . , tn), then our model obtain two hidden states ZX,Y and ZT,Y by attention with source context and template context. Here, We incorporate the ZX,Y containing source language information and ZX,Y including template information in a reasonable way: Z = βZX,Y + (1 −β)ZT,Y (3) where β is the parameter to control the degree of incorporation between source text and template. In order to effectively incorporate source and template information, we calculate the parameter β as below: β = σ(WY ZX,Y + UTZX,T) (4) where ZY is the decoder hidden state and WY and UT are parameter matrices. σ is the sigmoid activation function. 2.3 Training Strategy Similar to the conventional NMT, in order to make the model predict the target sequence, we use maximum likelihood estimation (MLE) loss function to update the model parameter by maximizing the log likelihood of translation over training set D. When we train the PθX→Y without the template Transformer encoder, we only need to optimize the following loss function: LθX→Y (D) = X X,Y ∈D log PθX→Y (Y |X) (5) where θX→Y are the parameters of the source language Transformer encoder and the target language Transformer decoder. When we train the Pθ(X,T )→Y with the template Transformer encoder, the loss function could be calculated by: Lθ(X,T )→Y (D) = X X,Y ∈D log Pθ(X,T )→Y (Y |X, T) (6) where θ(X,T)→Y are the parameters of the source language Transformer encoder, template language Transformer encoder and target language Transformer decoder. To balance the two objectives, our model is trained on LθX→Y (D) objective for the α% iterations, and trained on Lθ(X,T )→Y (D) objective for the (1−α)% interations. Therefore, this procedure is equivalent to the following formula: Lθ(D) = αLθX→Y (D) + (1 −α)Lθ(X,T )→Y (D) (7) where α is a scaling factor accounting for the difference in magnitude between LθX→Y (D) and Lθ(X,T )→Y (D). 
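The fusion in Equations 3 and 4 can be written compactly in PyTorch. The sketch below mixes the source-side and template-side cross-attention outputs with a learned sigmoid gate; treating β as an element-wise gate computed from the two context vectors is our reading of the (notation-overloaded) equations, not a reference implementation.

```python
# Gated fusion of source-attention and template-attention contexts, Eqs. (3)-(4).
import torch
import torch.nn as nn

class GatedContextFusion(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.w = nn.Linear(d_model, d_model, bias=False)   # applied to the source-side context
        self.u = nn.Linear(d_model, d_model, bias=False)   # applied to the template-side context

    def forward(self, ctx_src, ctx_tpl):
        # ctx_src, ctx_tpl: (batch, tgt_len, d_model) cross-attention outputs Z_{X,Y}, Z_{T,Y}
        beta = torch.sigmoid(self.w(ctx_src) + self.u(ctx_tpl))   # Eq. (4)
        return beta * ctx_src + (1 - beta) * ctx_tpl              # Eq. (3)

# fusion = GatedContextFusion(d_model=512)
# z = fusion(ctx_src, ctx_tpl)   # fed to the remaining decoder sub-layers
```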
In practice, we find optimizing these two objectives can make training procedure easier and get a higher BLEU score since there exist a few low-quality templates to influence the translation quality. Through optimizing two objectives simultaneously, we can reduce the effect of some lowquality templates and improve the stability of our model. 3 Experiments We conducted experiments on four benchmarks, including LDC Chinese-English, WMT14 EnglishGerman, IWSLT14 German-English, and ASPEC Japanese-Chinese translation tasks. By conducting experiments on these four benchmarks, these settings prove that our approach is suitable for diverse situations: (1) These four benchmarks provide a wide coverage of both scale and genres. They vary from small scale to large scale (2) We use the different domains, which include news, science, and talk domain. (3) We also conduct the experiments 5983 on different language pairs, including the GermanEnglish translation task, the English-German translation task, the Chinese-English translation task, and the Japanese-Chinese translation task. 3.1 Datasets In order to verify the effectiveness of our method, we conduct experiments on four benchmarks. WMT14 and LDC datasets are from the news domain. IWSLT14 dataset is from TED talk. ASPEC dataset is from a scientific paper excerpt corpus. LDC Chinese-English We use a subset from LDC corpus1 which has nearly 1.4M sentences originally. The training set is selected from the LDC corpus that consists of 1.2M sentence pairs after dropping the low-quality sentence pairs of which the length is more than 2. We used the NIST 2006 dataset as the validation set for evaluating performance in the training procedure, and NIST 2003, 2005, 2008 and 2012 as test sets, which all have 4 English references for each Chinese sentence. IWSLT14 German-English This dataset contains 16K training sequence pairs. We randomly sample 5% of the training data as valid test. Besides, we merge the multiple testsets dev2010, dev2012, tst2010, tst2011, tst2012 for testing. WMT14 English-German The training data consists of 4.5M sentence pairs. The validation set is devtest2014, and the test set is newstest2014. ASPEC Japanese-Chinese We use 0.67M sentence pairs from ASPEC Japanese-Chinese corpus (Nakazawa et al., 2016) 2. We use the devtest as the development data, which contains 2090 sentences, and the test data contains 2107 sentences with a single reference per source sentence. 3.2 Preprocessing and Training Details LDC Chinese-English The base Transformer model is used for this task, which includes 6 layers, each layer of which has the hidden dimensions of 512, feedforward dimensions of 2048 , and 8 attention heads. We use Moses (Koehn et al., 2007) to tokenize English sentences and our in-house tool to tokenize Chinese sentences. We use Byte Pair Encoding (BPE) (Sennrich et al., 2016) to encode 1LDC2002E17, LDC2002E18, LDC2003E07, LDC2003E14, LDC2005E83, LDC2005T06, LDC2005T10, LDC2006E17, LDC2006E26, LDC2006E34, LDC2006E85, LDC2006E92, LDC2006T06, LDC2004T08, LDC2005T10 2http://orchid.kuee.kyoto-u.ac.jp/ASPEC/ sentences using a shared vocabulary of 40K symbols. IWSLT14 German-English We adopt the small setup of the Transformer model. The model has 6 layers with the embedding size of 512, a feedforward size of 1024, and 4 attention heads. In order to prevent overfitting, we use a dropout of 0.3, a l2 weight decay of 10−4, and a label smoothing of 0.1. We use BPE to encode sentences with a shared vocabulary of 10K symbols. 
WMT14 English-German We use the big setting of Transformer (Vaswani et al., 2017), in which both the encoder and the decoder have 6 layers, with the embedding size of 1024, feedforward size of 4096, and 16 attention heads. The dropout rate is fixed as 0.3. We adopt Adam (Kingma and Ba, 2015) optimizer with a learning rate 0.1 of the similar learning rate schedule as Transformer (Vaswani et al., 2017). We set the batch size as 6000 and the update frequency as 16 on 8 GPUs for updating parameters (Ott et al., 2018) to imitate 128 GPUs. The datasets are encoded by BPE with a shared vocabulary (Sennrich et al., 2016) of 40K symbols. ASPEC Japanese-Chinese We use the base setting of Transformer the same to the ChineseEnglish translation task. Following the similar learning rate schedule (Vaswani et al., 2017), we set the learning rate as 0.1. Chinese and Japanese sentences are tokenized with our in-house tools and encoded by BPE with a shared vocabulary of 10K symbols. 3.3 Evaluation We evaluate the performance of the translation results. The evaluation metric is BLEU (Papineni et al., 2002). For the Chinese-English and GermanEnglish translation tasks, we use case-insensitive tokenized BLEU scores. For the English-German translation task, we use case-sensitive tokenized BLEU scores for evaluation. All the experiments last for 150 epochs and use Stanford parser to generate templates (Manning et al., 2014). For all translation tasks, we use the checkpoint, which has the best valid performance on the valid set. For different test sets, we adapt the beam size and the length penalty to get better performance. In order to avoid the difference of the tokenizer for Chinese translation result evaluation, we adopt the character-level BLEU for testing. Checkpoint averaging is not used, except notification. 5984 Zh →En MT06 MT03 MT05 MT08 MT12 Avg. ConvS2S (Gehring et al., 2017) 39.98 42.25 41.22 33.43 32.21 37.28 GNMT (Wu et al., 2016) 40.53 42.88 42.73 33.97 32.55 38.03 Transformer (our implementation) 43.60 45.80 44.52 36.62 34.60 40.39 ST-NMT (our proposed) 44.69 46.56 46.04 37.53 35.99 41.53 Table 1: Evaluation results on Zh →En translation task with BLEU% metric. The “Avg.” column means the averaged result of all NIST test sets except NIST2006. The result of our model is statistically significant compared to the other baselines (p < 0.01). 3.4 Baselines We compare our approach with two types of baselines including one-pass baselines and multi-pass baselines. One-pass Baselines: ConvS2S (Gehring et al., 2017) is a strong CNN-based baseline. We report the results referring to the paper of convolutional sequence to sequence model (ConvS2S). RNMT+ (Chen et al., 2018) is a state-of-the-art RNN-based NMT model. GNMT (Wu et al., 2016) is the typical encoder-decoder framework. We use the similar setting3 for all experiments. Transformer (Vaswani et al., 2017) is a strong baseline which has the state-of-the-art performance. We reimplement this baseline4. LightConv and DynamicConv (Wu et al., 2019) are simpler but effective baselines. We directly report the results in the paper. Multi-pass Baselines: Deliberation network (Xia et al., 2017) and SoftPrototype (Wang et al., 2019b) generates and polishes the raw text by a two-pass manner. SB-NMT (Zhou et al., 2019a) is a synchronous bidirectional neural machine translation which predicts its outputs using two direction simultaneously. ABD-NMT (Zhang et al., 2018) is an encoder-decoder NMT framework with the forward and backward decoder. 
By considering the agreement of both directions left-to-right (L2R) and right-to-left (R2L), Rerank-NMT (Liu et al., 2016) rescores all candidates. SBSG (Zhou et al., 2019b) is a synchronous bidirectional sequence generation model which predicts its translation from both sides to the middle simultaneously. Insertion Transformer (Stern et al., 2019) is a nonmonotonic method which predicts the translation 3https://github.com/NVIDIA/DeepLearningExamples/tree/ master/PyTorch/Translation/GNMT 4https://github.com/pytorch/fairseq De →En BLEU GNMT (Wu et al., 2016) 31.44 RNMT+ (Chen et al., 2018) 34.51 ConvS2S (Gehring et al., 2017) 30.41 LightConv (Wu et al., 2019) 34.80 DynamicConv (Wu et al., 2019) 35.20 Rerank-NMT (Liu et al., 2016) 34.82 Transformer (our implementation) 34.43 ST-NMT (our proposed) 35.24 Table 2: BLEU-4 scores (%) on IWSLT14 De→En task. The result of our model is statistically significant compared to the other baselines (p < 0.05). by inserting method. 3.5 Results For the IWSLT14 German-English machine translation task, we present the results of the ST-NMT and other strong baselines in Table 2. We compare our method with other various methods, including GNMT, RNMT+, convS2S, LightConv, DynamicConv, and the Transformer model with the small setting. The Rerank-NMT model gets 34.82 BLEU by using the two-pass results, including leftto-right (L2R) and right-to-left (R2L), and selects the best candidates. As shown in Table 2, our model also significantly outperforms others and gains an improvement of 0.81 BLEU points than a strong Transformer baseline model. Moreover, our method outperforms the GNMT by 3.80 BLEU points, ConvS2S by 4.83 BLEU, LightConv by 0.44 BLEU, Dynamic by 0.04 BLEU and RerankNMT by 0.42 BLEU. We secondly evaluate our method on the LDC Chinese-English translation task. The evaluation results on all NIST test sets against baselines are listed in Table 1. Our ST-NMT beats the other 5985 En →De BLEU GNMT (Wu et al., 2016) 24.61 ConvS2S (Gehring et al., 2017) 25.16 Transformer (Vaswani et al., 2017) 28.40 RNMT+ (Chen et al., 2018) 28.49 Rerank-NMT (Liu et al., 2016) 27.81 ABD-NMT (Liu et al., 2016) 28.22 Deliberation Network (Xia et al., 2017) 29.11 SoftPrototype (Wang et al., 2019b) 29.46 SB-NMT (Zhou et al., 2019a) 29.21 SBSG (Zhou et al., 2019b) 27.45 Insertion Transformer (Stern et al., 2019) 27.41 Transformer (our implementation) 29.25 ST-NMT (our proposed) 29.68 Table 3: BLEU-4 scores (%) on WMT14 En→De task. The result of our model is statistically significant compared to the other baselines (p < 0.05). Ja →Zh BLEU GNMT (Wu et al., 2016) 49.12 ConvS2S (Gehring et al., 2017) 50.32 Transformer (our implementation) 52.02 ST-NMT (our proposed) 52.84 Table 4: Character-level BLEU-4 scores (%) on ASPEC Ja→Zh task. The result of our model is statistically significant compared to the other baselines (p < 0.01). baselines and outperforms the Transformer baseline by 1.14 BLEU point on average, which shows that the template could effectively improve the performance. More specifically, our model outperforms the Transformer model by 0.76 BLEU on NIST2003, 1.52 BLEU on NIST 2005, 0.91 BLEU on NIST 2008, and 1.39 BLEU on NIST 2012. We further demonstrate the effectiveness of our model on WMT14 English-German translation tasks, and we also compare our model with other competitive models, including ABD-NMT (Zhang et al., 2018), Deliberation Network (Xia et al., 2017), SoftPrototype (Wang et al., 2019b), SBNMT (Zhou et al., 2019a) and SBSG (Zhou et al., 2019b). 
As shown in Table 3, our model again significantly outperforms the others and gains an improvement of 0.43 BLEU points over a strong Transformer model. To investigate the effect of our approach on different language pairs, we also evaluate our model on the Japanese-Chinese translation task.

[Figure 4: The effect of multiple templates. We feed the top-K results of the beam search as multiple templates, together with the source sentence, to generate the target translation. BLEU for 1-8 templates: 29.68, 29.54, 29.44, 29.48, 29.62, 29.55, 29.34, 29.22.]

According to Table 4, ST-NMT outperforms GNMT by 3.72 BLEU points, ConvS2S by 2.52 BLEU points, and the Transformer model by 0.82 BLEU points, which demonstrates that the soft templates extracted from the constituency-based parse tree also bring strong positive effects.

3.6 Multiple Templates Because of the diversity of the templates, we investigate performance with different numbers of templates. On top of the original parallel training data $D = \{(x^{(i)}, y^{(i)})\}_{i=1}^N$, we construct the source-text-to-soft-target-template training data $D_{X \to T} = \{(x^{(i)}, t^{(i)})\}_{i=1}^N$ with the model $P_{\theta_{X \to T}}$. Through this construction procedure, we can use the top-K results of beam search from $P_{\theta_{X \to T}}$ as multiple templates, expanding the training data to $D_{X \to T} = \{(x^{(1)}, t^{(1)}_{\mathrm{top}1}), \dots, (x^{(1)}, t^{(1)}_{\mathrm{top}K}), \dots, (x^{(N)}, t^{(N)}_{\mathrm{top}1}), \dots, (x^{(N)}, t^{(N)}_{\mathrm{top}K})\}$. As shown in Figure 4, our model achieves the best performance when using only a single template. With 8 templates, it obtains its worst BLEU score of 29.22. We conclude that the model may become more robust but can perform worse as the number of templates rises. In addition, to further improve the stability of our model, we expand the dataset by selecting random templates for each source sentence; the different templates confuse the model, although they can make it more robust.

[Figure 5: We use two objectives to update the model parameters: one uses only the source sentence, the other uses the source and the soft template together to generate the final translation; the hyper-parameter α balances the two. BLEU for α = 0.1-0.9: 28.95, 29.03, 29.12, 29.44, 29.68, 29.41, 29.32, 29.45, 29.48.]

3.7 Balance of Two Objectives To further control how much our model leverages templates for translation, we tune the hyper-parameter α; as its value rises, the contribution of template information gradually decreases. To investigate the effect of this hyper-parameter, we set α to the discrete values {10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 100%}. According to Figure 5, when α ranges from 0.4 to 0.9, our model obtains better performance, greater than or equal to 29.3 BLEU. The results show that α can be set within a reasonable interval (0.4 ≤ α ≤ 0.9) to keep the balance between source text and template.

3.8 Depth of Parsing Tree Since templates derived from different depths can lead to divergent performance, we examine our model with different depths. The effect of the template extraction described in Section 3 is determined by the sub-tree, which is controlled by its depth.
For the same constituency-based parse tree, the different sub-tree can be obtained based on the different chosen depth d. When we get the sub-tree, the template could be derived from it. The depth of the constituency-based parse tree is decided by a simple but effective strategy as formula: d = min(max(L × λ, γ1), γ2) (8) where L is the length of the input sentence, γ1 is the lower bound, γ2 is the upper bound depth λ MT03 MT05 MT08 MT12 0.10 45.92 45.01 36.55 35.34 0.15 46.56 46.04 37.53 35.99 0.20 46.02 45.20 37.08 35.82 0.25 46.27 44.83 36.88 35.64 0.30 46.08 45.02 36.72 35.54 0.35 46.22 44.92 36.84 35.51 0.40 46.32 45.40 36.94 35.61 Table 5: The results of the different depth on NIST2003, NIST2005, NIST2008 and NIST2012. λ MT03 MT05 MT08 MT12 0.15 79.4 81.6 78.6 77.6 Table 6: The ratio(%) of overlapping words between the predicted soft target template and the translation on NIST2003, NIST2005, NIST2008 and NIST2012. of the sub-tree and λ is the ratio of the length of source sentence. When the λ approximates 1.0, the template contains more target tokens and less tags. In addition, we tune the depth on the LDC training data and list the results. According to the Table 5, the soft templates of the specific depth provide helpful information to the translation procedure when the λ = 0.15 in the LDC dataset. 3.9 Ratio of Overlapping Words To measure contribution of the predicted soft target template for final translation, we calculate the overlapping words between the template and the translation. Table 6 gives the specific overlapping words ratio on the different test sets including NIST2003, NIST2005, NIST2008 and NIST2012. The overlapping ratio is calculated by the following formula: ratio = P w∈T min (County(w), Countt(w)) P w∈T Countt(w) (9) where County(·) and Countt(·) denote the number of w in the target translation Y and the template T, and w is the words in the target language. The overlapping ratio represents the correlation between the predicted template T and the target translation Y . According to Table 6, the correlation between the template T and the translation Y is highly relevant which demonstrates the contribution of our template to the final translation. 5987 Source 另一方面, 如果我们反应过度, 将会被他们欺骗. Reference on the other hand , if we overreact , we will be deceived by their trick . Template on the other hand , if NP VP , we will VP . Ours on the other hand , if we react too much , we will be hit by them . Table 7: A Chinese-English translation example of our proposed method. VP and NP represent non-terminal nodes in the constituency-based parse tree. 3.10 Example Study To further illustrate which aspects of NMT are improved by the target soft template, we provide a Chinese-English translation example shown in 7. Templates provide the structural and grammatical information of the target sentence. For instance, Chinese source sentence “另一方面, 如果我们 反应过度, 将会被他们欺骗”, our model first predicts the target template “on the other hand , if NP VP , we will VP ”, and then generate the final translation “on the other hand , if we react too much, we will be hit by them”. Our target template provides the sentence pattern “If sb. do sth, sb. will be done”. Our method introduces the constituency-based parse tree and utilizes the constituency grammar to distinguish terminal and non-terminal nodes. Therefore, our model can automatically learn sentence patterns, including grammatical and structural information. 
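To illustrate how a soft target template of the kind shown in Table 7 can be read off a constituency parse at a chosen depth, here is a small sketch using an NLTK-style tree. The depth rule follows Eq. (8) and the overlap measure follows Eq. (9); the cutoff behaviour is our reading of Section 3.8, and the authors' exact extraction procedure may differ.

```python
from collections import Counter
from nltk import Tree

def template_depth(sent_len, lam, gamma1, gamma2):
    """Depth rule of Eq. (8): d = min(max(L * lambda, gamma1), gamma2)."""
    return int(min(max(sent_len * lam, gamma1), gamma2))

def extract_template(parse_str, depth):
    """Keep words reached above `depth`; replace deeper subtrees by their labels
    (e.g. NP, VP), yielding templates such as the one in Table 7."""
    def visit(node, d):
        if not isinstance(node, Tree):   # a terminal word survives as-is
            return [node]
        if d >= depth:                   # truncate: emit the non-terminal tag
            return [node.label()]
        out = []
        for child in node:
            out.extend(visit(child, d + 1))
        return out
    return " ".join(visit(Tree.fromstring(parse_str), 0))

def overlap_ratio(template_tokens, translation_tokens):
    """Eq. (9): clipped counts of template words found in the translation, divided
    by the total template word count. Non-terminal tags such as NP/VP should be
    filtered out of `template_tokens` before calling this."""
    c_t, c_y = Counter(template_tokens), Counter(translation_tokens)
    clipped = sum(min(c_y[w], c_t[w]) for w in c_t)
    return clipped / max(sum(c_t.values()), 1)
```

For the example in Table 7, a shallow cutoff keeps high-level words such as "on the other hand , if" while the embedded clauses collapse into NP and VP tags.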
4 Related Work Many types of encoder-decoder architecture (Bahdanau et al., 2015; Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017; Chen et al., 2018) have been proposed in the past few years. Furthermore, Transformer enhances the capability of NMT in capturing long-distance dependencies based on these backbone models, including CNN-based, RNN-based, and Transformer based architecture. To improve the quality of the translation, many authors have endeavored to adopt multi-pass generation decoding method, their models first predict the rough translation and then generate the final translation based on the previous draft (Niehues et al., 2016; Chatterjee et al., 2016; JunczysDowmunt and Grundkiewicz, 2017; Xia et al., 2017; Geng et al., 2018; Wang et al., 2019b). Besides, some works (Liu et al., 2016; Zhang et al., 2018; Zhou et al., 2019b,a) use the right-toleft (R2L) and left-to-right (L2R) to improve the quality of machine translation. Non-Autoregressive decoding (Ghazvininejad et al., 2019) first predicts the target tokens and masked tokens, which will be filled in the next iterations. Then, the model predicts the unmasked tokens on top of the source text and a mixed translation consisting of the masked and unmasked tokens. Semi-autoregressive also (Akoury et al., 2019) predicts chunked fragments or the unmasked tokens based on the tree structure before the final translation. In addition, there are many existing works (Eriguchi et al., 2016; Aharoni and Goldberg, 2017; Wu et al., 2017; Wang et al., 2018; Dong and Lapata, 2018; Wang et al., 2018; Gu et al., 2018) which incorporate syntax information or the tree structure into NMT to improve the quality of translation results. 5 Conclusion In this work, we propose a novel approach that utilizes source text and additional soft templates. More specifically, our approach can extract the templates from the sub-tree, which derives from the specific depth of the constituency-based parse tree. Then, we use a Transformer model to predict the soft target templates conditioned on the source text. On top of soft templates and source text, we incorporate the template information to guide the translation procedure. We compare our soft-template neural machine translation (ST-NMT) with other baselines on four benchmarks and multiple language pairs. Experimental results show that our ST-NMT significantly improves performance on these datasets. Acknowledgments This work was supported in part by the National Natural Science Foundation of China (Grant Nos.U1636211, 61672081,61370126), the Beijing Advanced Innovation Center for Imaging Technology (Grant No.BAICIT2016001), and the Fund of the State Key Laboratory of Software Development Environment (Grant No.SKLSDE2019ZX-17). 5988 References Roee Aharoni and Yoav Goldberg. 2017. Towards string-to-tree neural machine translation. In ACL 2017, pages 132–140. Nader Akoury, Kalpesh Krishna, and Mohit Iyyer. 2019. Syntactically supervised transformers for faster neural machine translation. In ACL 2019, pages 1269–1281. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR 2015. Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, rerank and rewrite: Soft template based neural summarization. In ACL 2018, pages 152– 161. Rajen Chatterjee, Jos´e G. C. de Souza, Matteo Negri, and Marco Turchi. 2016. The FBK participation in the WMT 2016 automatic post-editing shared task. In WMT 2016, pages 745–750. 
Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Mike Schuster, Noam Shazeer, Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Zhifeng Chen, Yonghui Wu, and Macduff Hughes. 2018. The best of both worlds: Combining recent advances in neural machine translation. In ACL 2018, pages 76–86. Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In ACL 2018, pages 731–742. Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In EMNLP 2017, pages 866–874. Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In ACL 2016. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In ICML 2017, pages 1243–1252. Xinwei Geng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2018. Adaptive multi-pass decoder for neural machine translation. In EMNLP 2018, pages 523– 532. Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In EMNLP 2019, pages 6111–6120. Jetic Gu, Hassan S. Shavarani, and Anoop Sarkar. 2018. Top-down tree structured decoding with syntactic connections for neural machine translation and parsing. In EMNLP 2018, pages 401–413. Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. TACL, 6:437–450. Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2017. An exploration of neural sequence-tosequence architectures for automatic post-editing. In IJCNLP 2017, pages 120–129. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR 2015. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL 2007, pages 177–180. Lemao Liu, Masao Utiyama, Andrew M. Finch, and Eiichiro Sumita. 2016. Agreement on targetbidirectional neural machine translation. In NAACL 2016, pages 411–416. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In ACL 2014, pages 55–60. Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchimoto, Masao Utiyama, Eiichiro Sumita, Sadao Kurohashi, and Hitoshi Isahara. 2016. Aspec: Asian scientific paper excerpt corpus. In LREC 2016. Jan Niehues, Eunah Cho, Thanh-Le Ha, and Alex Waibel. 2016. Pre-translation for neural machine translation. In COLING 2016, pages 1828–1836. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In WMT 2018, pages 1–9. Gaurav Pandey, Danish Contractor, Vineet Kumar, and Sachindra Joshi. 2018. Exemplar encoder-decoder for neural conversation generation. In ACL 2018, pages 1329–1338. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL 2002, pages 311–318. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL 2016, pages 1715–1725. Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. 
Insertion transformer: Flexible sequence generation via insertion operations. In ICML 2019, pages 5976–5985. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS 2017, pages 5998–6008. 5989 Kai Wang, Xiaojun Quan, and Rui Wang. 2019a. Biset: Bi-directional selective encoding with template for abstractive summarization. In ACL 2019, pages 2153–2162. Xinyi Wang, Hieu Pham, Pengcheng Yin, and Graham Neubig. 2018. A tree-based decoder for neural machine translation. In EMNLP 2018, pages 4772– 4777. Yiren Wang, Yingce Xia, Fei Tian, Fei Gao, Tao Qin, Cheng Xiang Zhai, and Tie-Yan Liu. 2019b. Neural machine translation with soft prototype. In NIPS 2019, pages 6313–6322. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2018. Learning neural templates for text generation. In EMNLP 2018, pages 3174–3187. Felix Wu, Angela Fan, Alexei Baevski, Yann N. Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. In ICLR 2019. Shuangzhi Wu, Dongdong Zhang, Nan Yang, Mu Li, and Ming Zhou. 2017. Sequence-to-dependency neural machine translation. In ACL 2017, pages 698–707. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Yingce Xia, Fei Tian, Lijun Wu, Jianxin Lin, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2017. Deliberation networks: Sequence generation beyond one-pass decoding. In NIPS 2017, pages 1784–1794. Xiangwen Zhang, Jinsong Su, Yue Qin, Yang Liu, Rongrong Ji, and Hongji Wang. 2018. Asynchronous bidirectional decoding for neural machine translation. In AAAI 2018, pages 5698–5705. Long Zhou, Jiajun Zhang, and Chengqing Zong. 2019a. Synchronous bidirectional neural machine translation. TACL, 7:91–105. Long Zhou, Jiajun Zhang, Chengqing Zong, and Heng Yu. 2019b. Sequence generation: From both sides to the middle. In IJCAI 2019, pages 5471–5477.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5990–5997 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5990 Tagged Back-translation Revisited: Why Does It Really Work? Benjamin Marie Raphael Rubino Atsushi Fujita National Institute of Information and Communications Technology 3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0289, Japan {bmarie, raphael.rubino, atsushi.fujita}@nict.go.jp Abstract In this paper, we show that neural machine translation (NMT) systems trained on large back-translated data overfit some of the characteristics of machine-translated texts. Such NMT systems better translate humanproduced translations, i.e., translationese, but may largely worsen the translation quality of original texts. Our analysis reveals that adding a simple tag to back-translations prevents this quality degradation and improves on average the overall translation quality by helping the NMT system to distinguish back-translated data from original parallel data during training. We also show that, in contrast to high-resource configurations, NMT systems trained in lowresource settings are much less vulnerable to overfit back-translations. We conclude that the back-translations in the training data should always be tagged especially when the origin of the text to be translated is unknown. 1 Introduction During training, neural machine translation (NMT) can leverage a large amount of monolingual data in the target language. Among existing ways of exploiting monolingual data in NMT, the so-called back-translation of monolingual data (Sennrich et al., 2016a) is undoubtedly the most prevalent one, as it remains widely used in state-of-the-art NMT systems (Barrault et al., 2019). NMT systems trained on back-translated data can generate more fluent translations (Sennrich et al., 2016a) thanks to the use of much larger data in the target language to better train the decoder, especially for low-resource conditions where only a small quantity of parallel training data is available. However, the impact of the noisiness of the synthetic source sentences generated by NMT largely remains unclear and understudied. Edunov et al. (2018) even showed that introducing synthetic noise in back-translations actually improves translation quality and enables the use of a much larger quantity of back-translated data for further improvements in translation quality. More recently, Caswell et al. (2019) empirically demonstrated that adding a unique token at the beginning of each back-translation acts as a tag that helps the system during training to differentiate back-translated data from the original parallel training data and is as effective as introducing synthetic noise for improving translation quality. It is also much simpler since it requires only one editing operation, adding the tag, and non-parametric. However, it is not fully understood why adding a tag has such a significant impact and to what extent it helps to distinguish back-translated data from the original parallel data. In this paper, we report on the impact of tagging back-translations in NMT, focusing on the following research questions (see Section 2 for our motivation). Q1. Do NMT systems trained on large backtranslated data capture some of the characteristics of human-produced translations, i.e., translationese? Q2. Does a tag for back-translations really help differentiate translationese from original texts? Q3. 
Are NMT systems trained on back-translation for low-resource conditions as sensitive to translationese as in high-resource conditions? 2 Motivation During the training with back-translated data (Sennrich et al., 2016a), we can expect the NMT system to learn the characteristics of back-translations, i.e., translations generated by NMT, and such characteristics will be consequently exhibited at test time. However, translating translations is a rather artificial task, whereas users usually want to perform translation of original texts. Nonetheless, many 5991 of the test sets used by the research community for evaluating MT systems actually contain a large portion of texts that are translations produced by humans, i.e., translationese. Translationese texts are known to be much simpler, with a lower mean sentence length and more standardized than original texts (Laviosa-Braithwaite, 1998). These characteristics overlap with those of translations generated by NMT systems that have been shown simpler, shorter, and to exhibit a less diverse vocabulary than original texts (Burlot and Yvon, 2018). These similarities raise Q1. Caswell et al. (2019) hypothesized that tagging back-translations helps the NMT system during training to make some distinction between the backtranslated data and the original parallel data. Even though the effectiveness of a tag has been empirically demonstrated, the nature of this distinction remains unclear. Thus, we pose Q2. The initial motivation for back-translation is to improve NMT for low-resource language pairs by augmenting the training data. Therefore, we verify whether our answers to Q1 and Q2 for highresource conditions are also valid in low-resource conditions, answering Q3. 3 Experiments 3.1 Data As parallel data for training our NMT systems, we used all the parallel data provided for the shared translation tasks of WMT191 for English– German (en-de), excluding the Paracrawl corpus, and WMT152 for English–French (en-fr).3 As monolingual data for each of English, German, and French to be used for back-translation, we concatenated all the News Crawl corpora provided by WMT, and randomly extracted 25M sentences. For our simulation of low-resource conditions, we randomly sub-sampled 200k sentence pairs from the parallel data to train NMT systems and used these systems to back-translate 1M sentences randomly sub-sampled from the monolingual data. For validation, i.e., selecting the best model after training, we chose newstest2016 for en-de and newstest2013 for en-fr, since they are rather balanced on their source side between translationese and original texts. For 1http://www.statmt.org/wmt19/ translation-task.html 2http://www.statmt.org/wmt15/ translation-task.html 3After pre-processing and cleaning, we obtained 5.2M and 32.8M sentence pairs for en-de and en-fr, respectively. evaluation, since most of the WMT test sets are made of both original and translationese texts, we used all the newstest sets, from WMT10 to WMT19 for en-de, and from WMT08 to WMT15 for en-fr.4 All our data were pre-processed in the same way: we performed tokenization and truecasing with Moses (Koehn et al., 2007). 3.2 NMT Systems For NMT, we used the Transformer (Vaswani et al., 2017) implemented in Marian (Junczys-Dowmunt et al., 2018) with standard hyper-parameters for training a Transformer base model.5 To compress the vocabulary, we learned 32k byte-pair encoding (BPE) operations (Sennrich et al., 2016b) for each side of the parallel training data. 
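Tagging back-translations only requires prepending one reserved token to each synthetic source sentence before mixing it with the genuine parallel data (Caswell et al., 2019). A minimal sketch of the data construction, where the tag string and the data-handling details are our own placeholders rather than the exact setup used here:

```python
BT_TAG = "<BT>"  # any reserved token that the BPE model will not split away

def build_training_corpus(parallel_pairs, back_translated_pairs, tagged=True):
    """Return (source, target) pairs for the BT or T-BT training condition.

    parallel_pairs:        [(src, tgt), ...] genuine bitext
    back_translated_pairs: [(synthetic_src, original_tgt), ...] produced by the
                           target-to-source back-translation model
    """
    corpus = list(parallel_pairs)
    for syn_src, tgt in back_translated_pairs:
        src = f"{BT_TAG} {syn_src}" if tagged else syn_src
        corpus.append((src, tgt))
    return corpus
```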
The back-translations were generated through decoding with Marian the sampled monolingual sentences using beam search with a beam size of 12 and a length normalization of 1.0. The back-translated data were then concatenated to the original parallel data and a new NMT model was trained from scratch using the same hyperparameters used to train the model that generated the back-translations. We evaluated all systems with BLEU (Papineni et al., 2002) computed by sacreBLEU (Post, 2018). To evaluate only on the part of the test set that have original text or translationese on the source side, we used the --origlang option of sacreBLEU with the value “non-L1” for translationese texts and “L1” for original texts, where L1 is the source language, and report on their respective BLEU scores.6 3.3 Results in Resource-Rich Conditions Our results with back-translations (BT) and tagged back-translations (T-BT) are presented in Table 1. When using BT, we consistently observed a drop of BLEU scores for original texts for all the translations tasks, with the largest drop of 12.1 BLEU points (en→fr, 2014). Conversely, BLEU scores for translationese texts were improved for most tasks, with the largest gain of 10.4 BLEU points 4For WMT14, we used the “full” version instead of the default filtered version in sacreBLEU that does not contain information on the origin of the source sentences. 5The full list of hyper-parameters is provided in the supplementary material (Appendix A). 6sacreBLEU signatures where “L1” and “L2” respectively indicates a two-letter identifier for the source and target languages of either de-en, en-de, fr-en, or en-fr, and “XXX” the name of the test set: BLEU+case.mixed+lang.L1L2+numrefs.1+{origlang.L1,origlang.nonL2}+smooth.exp+test.XXX+tok.13a+version.1.4.2 5992 System test set de→en en→de all o n-o all o n-o BT 2010 28.9 (+0.5) 33.2 (-0.9) 27.9 (+0.7) 21.8 (-2.3) 24.6 (-5.7) 21.0 (-1.2) 2011 25.3 (-0.3) 29.9 (-1.0) 24.2 (-0.2) 19.9 (-1.4) 23.8 (-1.9) 19.0 (-1.1) 2012 27.1 (+0.3) 27.9 (-1.6) 27.0 (+0.7) 20.4 (-1.2) 24.5 (-4.6) 19.3 (-0.2) 2013 30.3 (+0.3) 34.7 (-1.6) 29.2 (+0.6) 23.8 (-1.9) 25.1 (-2.8) 23.6 (-1.7) 2014 32.8 (+2.2) 27.4 (-2.5) 36.8 (+7.0) 25.4 (-0.5) 23.2 (-3.3) 27.9 (+2.7) 2015 33.8 (+2.4) 22.5 (-1.9) 39.5 (+5.5) 27.2 (-1.1) 28.1 (-2.9) 24.7 (+1.9) 2017 35.5 (+3.0) 27.2 (-1.1) 42.8 (+7.4) 26.4 (-0.1) 26.3 (-3.6) 25.5 (+3.3) 2018 43.9 (+4.6) 32.0 (-1.0) 53.8 (+10.4) 38.0 (-1.4) 38.9 (-5.9) 35.0 (+3.8) 2019 33.1 (-1.5) 31.4 (-4.8) T-BT 2010 29.5 (+1.1) 34.4 (+0.3) 28.4 (+1.2) 25.0 (+0.9) 30.5 (+0.2) 23.4 (+1.2) 2011 26.4 (+0.8) 31.7 (+0.8) 25.2 (+0.8) 22.1 (+0.8) 25.8 (+0.1) 21.0 (+0.9) 2012 28.1 (+1.3) 30.2 (+0.7) 27.7 (+1.4) 22.8 (+1.2) 30.0 (+0.9) 20.9 (+1.4) 2013 30.8 (+0.8) 36.0 (-0.3) 29.6 (+1.0) 26.4 (+0.7) 28.1 (+0.2) 26.1 (+0.8) 2014 32.4 (+1.8) 29.6 (-0.3) 33.8 (+4.0) 27.9 (+2.0) 26.7 (+0.2) 29.4 (+4.2) 2015 33.9 (+2.5) 24.9 (+0.5) 37.7 (+3.7) 29.9 (+1.6) 32.1 (+1.1) 25.6 (+2.8) 2017 35.5 (+3.0) 28.1 (-0.2) 41.2 (+5.8) 28.7 (+2.2) 30.7 (+0.8) 26.0 (+3.8) 2018 43.2 (+3.9) 33.0 (+0.0) 50.4 (+7.0) 41.8 (+2.4) 45.6 (+0.8) 35.5 (+4.3) 2019 35.0 (+0.4) 37.6 (+1.4) System test set fr→en en→fr all o n-o all o n-o BT 2008 22.9 (-1.7) 27.9 (-2.6) 22.2 (-1.5) 23.2 (-0.2) 21.2 (-3.3) 23.6 (+0.5) 2009 26.5 (-2.3) 41.1 (-5.3) 23.9 (-1.6) 27.7 (+1.1) 22.7 (-2.0) 28.4 (+1.4) 2010 29.3 (-1.4) 27.4 (-7.8) 29.5 (+0.5) 28.2 (-0.5) 22.5 (-11.1) 29.8 (+2.5) 2011 29.4 (-1.9) 29.3 (-4.7) 29.4 (-1.1) 30.9 (+0.0) 36.7 (-8.2) 29.3 (+2.1) 2012 29.7 (-1.4) 34.3 (-4.3) 28.6 (-0.6) 28.4 
(+1.1) 26.3 (-4.1) 29.0 (+2.5) 2014 36.6 (+0.6) 31.4 (-4.7) 40.3 (+5.6) 32.9 (-3.1) 26.1 (-12.1) 39.6 (+6.1) 2015 36.2 (+0.0) 40.9 (-3.1) 29.8 (+3.5) 35.7 (+1.7) 25.1 (-4.4) 44.9 (+6.5) T-BT 2008 24.5 (-0.1) 29.5 (-1.0) 23.7 (+0.0) 23.8 (+0.4) 25.1 (+0.6) 23.5 (+0.4) 2009 28.9 (+0.1) 46.4 (+0.0) 25.7 (+0.2) 27.3 (+0.7) 25.1 (+0.4) 27.7 (+0.7) 2010 31.2 (+0.5) 35.1 (-0.1) 29.6 (+0.6) 30.0 (+1.3) 34.1 (+0.5) 28.9 (+1.6) 2011 31.8 (+0.5) 33.3 (-0.7) 31.4 (+0.9) 31.6 (+0.7) 45.3 (+0.4) 28.0 (+0.8) 2012 31.8 (+0.7) 38.3 (-0.3) 30.1 (+0.9) 28.9 (+1.6) 31.9 (+1.5) 28.1 (+1.6) 2014 37.3 (+1.3) 36.1 (+0.0) 37.2 (+2.5) 38.2 (+2.2) 39.7 (+1.5) 36.5 (+3.0) 2015 36.6 (+0.4) 43.2 (-0.8) 27.9 (+1.6) 36.0 (+2.0) 30.7 (+1.2) 41.2 (+2.8) Table 1: BLEU scores for NMT systems trained with back-translations (BT) and tagged back-translations (T-BT) for each origin of the source text: original (o) or translationese (n-o). The values in parentheses are the differences between the BLEU scores of the evaluated system and the vanilla system trained without any back-translated data. (de→en, 2018). These results give an answer to Q1: NMT overfits back-translations, potentially due to their much larger size than the original parallel data used for training. Interestingly, using backtranslations does not consistently improve translation quality. We assume that newstest sets may manifest some different characteristics of translationese from one year to another. Prepending a tag (T-BT) had a strong impact on the translation quality for original texts, recovering or even surpassing the quality obtained by the NMT system without back-translated data, always beating BT. The large improvements of BLEU scores over BT show that a tag helps in identifying translationese (answer for Q2). In the supplementary material (Appendix B), we present additional results obtained using more back-translations (up to 150M sentences) showing a similar impact of tags. However, while a tag in such a configuration prevents an even larger drop of the BLEU scores, it is not sufficient to attain a BLEU score similar to the configurations that use less back-translations. Interestingly, the best NMT system was not always the same depending on the translation direction and the origin of the test sets. It is thus possible to select either of the models to obtain the best translation quality given the origin of the source sentences, according to the results on the validation set for instance.7 7Since this observation is rather secondary, we present results for best model selection in the supplementary material (Appendix C). Note also that these BLEU scores can potentially be further increased by using a validation set whose source side is either original texts or translationese respectively to translate original texts or translationese at test time. 
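The per-origin scores reported above come from splitting each test set by the origin of its source side and scoring the subsets separately; the paper does this with sacreBLEU's --origlang option. A hedged sketch with the sacreBLEU Python API, where the per-segment origin labels are assumed to be available (how they are obtained, e.g. from the WMT SGML metadata, is left out):

```python
import sacrebleu

def bleu_by_origin(hypotheses, references, origins, src_lang):
    """Score 'original' vs. 'non-original' (translationese) segments separately.

    origins[i] is the original language of segment i (e.g. "de" or "en")."""
    scores = {}
    for label, keep in [("original", lambda o: o == src_lang),
                        ("non-original", lambda o: o != src_lang)]:
        idx = [i for i, o in enumerate(origins) if keep(o)]
        hyp = [hypotheses[i] for i in idx]
        ref = [references[i] for i in idx]
        scores[label] = sacrebleu.corpus_bleu(hyp, [ref]).score
    return scores
```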
5993 System test set de→en en→de all o n-o all o n-o BT 2010 24.1 (+9.5) 27.1 (+12.4) 23.3 (+8.8) 18.0 (+2.9) 21.6 (+2.7) 17.0 (+3.0) 2011 21.0 (+8.1) 23.9 (+10.3) 20.3 (+7.6) 16.3 (+2.3) 19.1 (+2.9) 15.6 (+2.1) 2012 22.2 (+8.6) 21.6 (+8.7) 22.3 (+8.5) 16.4 (+2.5) 19.8 (+2.6) 15.5 (+2.5) 2013 25.0 (+9.0) 28.1 (+9.6) 24.1 (+8.7) 19.6 (+2.9) 20.0 (+3.2) 19.5 (+2.8) 2014 25.1 (+11.3) 20.9 (+8.4) 27.7 (+13.3) 19.7 (+4.5) 18.7 (+3.3) 20.3 (+6.1) 2015 27.1 (+11.8) 18.4 (+6.9) 31.0 (+14.3) 21.5 (+4.0) 22.5 (+3.6) 18.3 (+5.0) 2017 27.6 (+12.5) 21.5 (+8.2) 32.4 (+16.2) 20.7 (+4.0) 20.8 (+2.7) 19.3 (+5.5) 2018 34.3 (+16.4) 25.2 (+10.7) 41.0 (+21.1) 29.3 (+6.7) 30.4 (+5.4) 26.3 (+8.3) 2019 26.1 (+11.9) 24.8 (+4.8) T-BT 2010 24.4 (+9.8) 27.4 (+12.7) 23.6 (+9.1) 18.8 (+3.7) 22.6 (+3.7) 17.7 (+3.7) 2011 21.8 (+8.9) 25.3 (+11.7) 20.9 (+8.2) 16.8 (+2.8) 20.2 (+4.0) 16.0 (+2.5) 2012 22.8 (+9.2) 22.9 (+10.0) 22.8 (+9.0) 17.2 (+3.3) 21.3 (+4.1) 16.1 (+3.1) 2013 25.9 (+9.9) 29.4 (+10.9) 24.9 (+9.5) 20.2 (+3.5) 20.5 (+3.7) 20.2 (+3.5) 2014 25.1 (+11.3) 22.1 (+9.6) 26.8 (+12.4) 20.1 (+4.9) 19.5 (+4.1) 20.6 (+6.4) 2015 27.0 (+11.7) 19.4 (+7.9) 30.5 (+13.8) 22.0 (+4.5) 23.5 (+4.6) 18.2 (+4.9) 2017 27.8 (+12.7) 22.5 (+9.2) 32.0 (+15.8) 21.1 (+4.4) 22.2 (+4.1) 19.2 (+5.4) 2018 34.2 (+16.3) 26.4 (+11.9) 39.8 (+19.9) 30.5 (+7.9) 32.9 (+7.9) 25.5 (+7.5) 2019 26.8 (+12.6) 26.9 (+6.9) System test set fr→en en→fr all o n-o all o n-o BT 2008 20.5 (+2.8) 26.3 (+1.6) 19.7 (+3.0) 21.3 (+4.1) 21.2 (+3.2) 21.3 (+4.3) 2009 24.0 (+3.3) 39.7 (+5.4) 21.2 (+3.1) 24.8 (+6.1) 21.6 (+5.1) 25.2 (+6.2) 2010 26.4 (+4.7) 28.3 (+4.0) 25.4 (+5.0) 26.1 (+6.0) 29.9 (+6.3) 24.9 (+5.8) 2011 26.2 (+3.4) 26.9 (+0.7) 26.0 (+4.1) 28.1 (+6.3) 38.1 (+8.6) 25.5 (+5.8) 2012 26.4 (+4.0) 31.4 (+1.3) 25.2 (+4.6) 26.4 (+6.3) 27.2 (+5.9) 26.1 (+6.3) 2014 32.2 (+7.8) 28.9 (+4.5) 33.6 (+10.5) 31.4 (+7.6) 28.9 (+4.5) 32.9 (+10.3) 2015 30.0 (+5.9) 34.0 (+5.0) 24.8 (+7.1) 29.9 (+8.0) 23.7 (+5.4) 35.5 (+10.1) T-BT 2008 21.3 (+3.6) 27.5 (+2.8) 20.4 (+3.7) 20.8 (+3.6) 21.7 (+3.7) 20.6 (+3.6) 2009 24.6 (+3.9) 41.6 (+7.3) 21.5 (+3.4) 23.7 (+5.0) 20.8 (+4.3) 24.1 (+5.1) 2010 27.0 (+5.3) 29.6 (+5.3) 25.7 (+5.3) 25.6 (+5.5) 29.8 (+6.2) 24.3 (+5.2) 2011 27.4 (+4.6) 29.7 (+3.5) 26.7 (+4.8) 27.3 (+5.5) 36.9 (+7.4) 24.8 (+5.1) 2012 27.3 (+4.9) 33.3 (+3.2) 25.7 (+5.1) 25.6 (+5.5) 26.8 (+5.5) 25.2 (+5.4) 2014 31.8 (+7.4) 29.9 (+5.5) 32.1 (+9.0) 31.0 (+7.2) 30.4 (+6.0) 30.9 (+8.3) 2015 30.6 (+6.5) 35.6 (+6.6) 23.7 (+6.0) 29.2 (+7.3) 24.0 (+5.7) 34.2 (+8.8) Table 2: BLEU scores for low-resource configurations. 3.4 Results in Low-Resource Conditions In low-resource conditions, as reported in Table 2, the translation quality can be notably improved by adding back-translations. Using BT, we observed improvements of BLEU scores ranging from 0.7 (fr→en, 2011) to 12.4 (de→en, 2010) BLEU points for original texts and from 2.1 (en→de, 2011) to 21.1 (de→en, 2018) BLEU points for translationese texts. These results remain in line with one of the initial motivations for using backtranslation: improving translation quality in lowresource conditions. In this setting without backtranslated data, the data in the target language is too small for the NMT system to learn reasonably good representations for the target language. Adding 5 times more data in the target language, through back-translation, clearly helps the systems without any negative impact of the noisiness of the backtranslations that were generated by the initial system. 
We assume here that since the quality of the back-translations is very low, their characteristics are quite different from the ones of translationese texts. This is confirmed by our observation that adding the tag has only a negligible impact on the BLEU scores for all the tasks (answer to Q3). 3.5 Tagged Test Sets A tag on back-translations helps identifying translationese during NMT training. Thus, adding the same tag on the test sets should have a very different impact depending on the origin of the source sentences. If we tag original sentences and decode them with a T-BT model, then we enforce the decoding of translationese. Since we mislead the decoder, translation quality should drop. On the other hand, by tagging translationese sentences, we help the decoder that can now rely on the tag to be very confident that the text to decode is translationese. Our results presented in Table 3 confirm these 5994 System de→en en→de fr→en en→fr 2017 2018 2017 2018 2012 2015 2012 2015 tagged original -2.0 -2.6 -5.9 -9.6 -7.5 -4.9 -10.1 -11.1 tagged non-original +1.6 +3.4 +0.8 +1.6 -3.1 +1.4 -0.3 +3.6 Table 3: Results with tagged test sets, either original or non-original, decoded with the T-BT model in the highresource condition. Delta BLEU scores are computed relatively to the configurations with untagged test sets. assumptions. We observed a drop of BLEU scores when decoding tagged original texts with the T-BT model, while we saw an improvement of translation quality for 6 out of 8 test sets when decoding tagged translationese texts. The remaining 2 test sets for which we did not observed any improvements are newstest2012 for both translation directions of enfr. It potentially indicates a mismatch between the characteristics of translationese in newstest2012 and those exhibited by back-translations used for training the T-BT model. 4 Discussions We empirically demonstrated that training NMT on back-translated data overfits some of its characteristics that are partly similar to those of translationese. Using back-translation improves translation quality for translationese texts but worsens it for original texts. Previous work (Graham et al., 2019; Zhang and Toral, 2019) showed that stateof-the-art NMT systems are better in translating translationese than original texts. Our results show that this is partly due to the use of back-translations which is also confirmed by concurrent and independent work (Bogoychev and Sennrich, 2019; Edunov et al., 2019). Adding a tag to back-translations prevents a large drop of translation quality on original texts while improvements of translation quality for translationese texts remain and may be further boosted by tagging test sentences at decoding time. Moreover, in low-resource conditions, we show that the overall tendency is significantly different from the high-resource conditions: backtranslation improves translation quality for both translationese and original texts while adding a tag to back-translations has only a little impact. We conclude from this study that training NMT on back-translated data, in high-resource conditions, remains reasonable when the user knows in advance that the system will be used to translate translationese texts. If the user does not know it a priori, a tag should be added to back-translations during training to prevent a possible large drop of translation quality. 
For future work, following the work on automatic identification of translationese (Rabinovich and Wintner, 2015; Rubino et al., 2016), we plan to investigate the impact of tagging translationese texts inside parallel training data, such as parallel sentences collected from the Web. Acknowledgments We would like to thank the reviewers for their useful comments and suggestions. A part of this work was conducted under the program “Research and Development of Enhanced Multilingual and Multipurpose Speech Translation System” of the Ministry of Internal Affairs and Communications (MIC), Japan. Benjamin Marie was partly supported by JSPS KAKENHI Grant Number 20K19879 and the tenure-track researcher start-up fund in NICT. Atsushi Fujita was partly supported by JSPS KAKENHI Grant Number 19H05660. References Lo¨ıc Barrault, Ondˇrej Bojar, Marta R. Costa-juss`a, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M¨uller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 Conference on Machine Translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1–61, Florence, Italy. Association for Computational Linguistics. Nikolay Bogoychev and Rico Sennrich. 2019. Domain, translationese and noise in synthetic data for neural machine translation. arXiv preprint arXiv:1911.03362. Franck Burlot and Franc¸ois Yvon. 2018. Using monolingual data in neural machine translation: a systematic study. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 144–155, Brussels, Belgium. Association for Computational Linguistics. Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 53–63, Florence, Italy. Association for Computational Linguistics. 5995 Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489–500, Brussels, Belgium. Association for Computational Linguistics. Sergey Edunov, Myle Ott, Marc’Aurelio Ranzato, and Michael Auli. 2019. On the evaluation of machine translation systems trained with back-translation. arXiv preprint arXiv:1908.05204. Yvette Graham, Barry Haddow, and Philipp Koehn. 2019. Translationese in machine translation evaluation. arXiv preprint arXiv:1906.09833. Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr´e F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116– 121, Melbourne, Australia. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics. Sara Laviosa-Braithwaite. 1998. 
Universals of translation. Routledge encyclopedia of translation studies. London: Routledge, pages 288–291. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, USA. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Ella Rabinovich and Shuly Wintner. 2015. Unsupervised identification of translationese. Transactions of the Association for Computational Linguistics, 3:419–432. Raphael Rubino, Ekaterina Lapshinova-Koltunski, and Josef van Genabith. 2016. Information density and quality estimation features as translationese indicators for human translation classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 960–970, San Diego, USA. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Mike Zhang and Antonio Toral. 2019. The effect of translationese in machine translation test sets. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 73– 81, Florence, Italy. Association for Computational Linguistics. A NMT system hyper-parameters For training NMT systems with Marian 1.7.6 (1d4ba73), we used the hyper-parameters, on 8 GPUs, presented by Table 4 and kept the remaining ones with their default values. 5996 --type transformer --train-sets para.L1 para.L2 --model model.npz --max-length 150 --mini-batch-fit --valid-freq 5000 --save-freq 5000 --workspace 4000 --disp-freq 500 --valid-sets dev.bpe32k.L1 dev.bpe32k.L2 --beam-size 12 --normalize=1 --valid-mini-batch 16 --overwrite --early-stopping 5 --cost-type=ce-mean-words --valid-metrics ce-mean-words bleu --keep-best --enc-depth 6 --dec-depth 6 --transformer-dropout 0.1 --learn-rate 0.0003 --lr-warmup 16000 --lr-decay-inv-sqrt 16000 --label-smoothing 0.1 --dim-vocabs 32000 32000 --optimizer-params 0.9 0.98 1e-09 --clip-norm 5 --sync-sgd --exponential-smoothing Table 4: Parameters of Marian used for training our NMT systems. B Experiments with Larger Quantity of Back-translaitons Table 5 presents the results using much larger backtranslations in the high-resource conditions. 
C Best Model Selection As discussed in Section 3.3, among the original model, the one trained with back-translation (BT), and the one trained with tagged back-translation (TBT), the best-performing model is not always the same depending on the translation direction. For de→en and en→de, the best model is always T-BT. However, for fr→en, the system that does not use any back-translation is the best to translate original texts while T-BT is the best for translationese texts. For en→fr, the best system for translating translationese texts is BT while the best system for translating original texts is T-BT. This selection is performed by evaluating the translation quality for each model on the validation sets original and translationese texts. By applying this selection strategy, we can significantly improve the overall translation quality for given test sets, as reported in Table 6. 5997 System test set de→en en→de all o n-o all o n-o BT 2010 28.7 (+0.3) 32.0 (-2.1) 27.9 (+0.7) 22.3 (-1.8) 25.8 (-4.5) 21.3 (-0.9) 2011 24.6 (-1.0) 29.2 (-1.7) 23.5 (-0.9) 19.9 (-1.4) 23.1 (-2.6) 19.1 (-1.0) 2012 26.4 (-0.4) 27.1 (-2.4) 26.2 (-0.1) 20.7 (-0.9) 25.2 (-3.9) 19.5 (+0.0) 2013 29.6 (-0.4) 33.1 (-3.2) 28.6 (+0.0) 23.8 (-1.9) 24.4 (-3.5) 23.7 (-1.6) 2014 32.4 (+1.8) 25.7 (-4.2) 37.3 (+7.5) 26.0 (+0.1) 23.4 (-3.1) 28.9 (+3.7) 2015 33.4 (+2.0) 21.2 (-3.2) 39.4 (+5.4) 27.4 (-0.9) 27.7 (-3.3) 25.7 (+2.9) 2017 34.6 (+2.1) 25.7 (-2.6) 42.2 (+6.8) 26.6 (+0.1) 25.9 (-4.0) 26.4 (+4.2) 2018 43.2 (+3.9) 30.1 (-2.9) 53.9 (+10.5) 38.1 (-1.3) 38.8 (-6.0) 35.4 (+4.2) 2019 31.4 (-3.2) 32.1 (-4.1) T-BT 2010 29.5 (+1.1) 34.1 (+0.0) 28.3 (+1.1) 24.9 (+0.8) 29.3 (-1.0) 23.7 (+1.5) 2011 25.9 (+0.3) 30.4 (-0.5) 24.8 (+0.4) 21.9 (+0.6) 26.0 (+0.3) 20.7 (+0.6) 2012 27.5 (+0.7) 28.8 (-0.7) 27.3 (+1.0) 22.7 (+1.1) 28.8 (-0.3) 21.1 (+1.6) 2013 30.7 (+0.7) 35.2 (-1.1) 29.6 (+1.0) 26.1 (+0.4) 27.4 (-0.5) 25.9 (+0.6) 2014 32.5 (+1.9) 28.2 (-1.7) 35.4 (+5.6) 28.2 (+2.3) 26.8 (+0.3) 30.0 (+4.8) 2015 33.7 (+2.3) 23.7 (-0.7) 38.3 (+4.3) 29.6 (+1.3) 31.1 (+0.1) 26.7 (+3.9) 2017 35.2 (+2.7) 27.3 (-1.0) 41.5 (+6.1) 28.3 (+1.8) 29.8 (-0.1) 26.3 (+4.1) 2018 43.4 (+4.1) 32.4 (-0.6) 51.5 (+8.1) 41.7 (+2.3) 45.0 (+0.2) 36.1 (+4.9) 2019 34.3 (-0.3) 36.5 (+0.3) System test set fr→en en→fr all o n-o all o n-o BT 2008 20.8 (-3.8) 27.4 (-3.1) 19.8 (-3.9) 21.6 (-1.8) 17.5 (-7.0) 22.5 (-0.6) 2009 23.9 (-4.9) 38.3 (-8.1) 21.2 (-4.3) 26.4 (-0.2) 20.4 (-4.3) 27.3 (+0.3) 2010 27.2 (-3.5) 27.7 (-7.5) 27.0 (-2.0) 26.7 (-2.0) 19.2 (-14.4) 28.8 (+1.5) 2011 27.3 (-4.0) 27.3 (-6.7) 27.3 (-3.2) 28.9 (-2.0) 31.4 (-13.5) 28.2 (+1.0) 2012 26.8 (-4.3) 31.4 (-7.2) 25.7 (-3.5) 26.5 (-0.8) 22.2 (-8.2) 27.7 (+1.2) 2014 33.5 (-2.5) 28.8 (-7.3) 36.9 (+2.2) 29.9 (-6.1) 20.6 (-17.6) 39.4 (+5.9) 2015 31.7 (-4.5) 35.5 (-8.5) 27.1 (+0.8) 32.4 (-1.6) 18.4 (-11.1) 44.9 (+6.5) T-BT 2008 24.7 (+0.1) 30.6 (+0.1) 23.8 (+0.1) 24.1 (+0.7) 25.6 (+1.1) 23.7 (+0.6) 2009 28.4 (-0.4) 45.3 (-1.1) 25.2 (-0.3) 27.7 (+1.1) 25.7 (+1.0) 28.0 (+1.0) 2010 31.2 (+0.5) 34.2 (-1.0) 29.8 (+0.8) 30.6 (+1.9) 34.5 (+0.9) 29.5 (+2.2) 2011 31.8 (+0.5) 32.7 (-1.3) 31.5 (+1.0) 31.6 (+0.7) 45.5 (+0.6) 28.0 (+0.8) 2012 31.6 (+0.5) 37.5 (-1.1) 30.2 (+1.0) 29.2 (+1.9) 31.9 (+1.5) 28.4 (+1.9) 2014 37.9 (+1.9) 35.6 (-0.5) 38.7 (+4.0) 38.5 (+2.5) 39.7 (+1.5) 37.0 (+3.5) 2015 36.2 (+0.0) 42.2 (-1.8) 28.3 (+2.0) 36.3 (+2.3) 30.2 (+0.7) 42.1 (+3.7) Table 5: BLEU scores for all the systems in the high-resource conditions using 150M back-translations or the entire news crawl corpus for en→fr (76.6M sentences). Sys. 
2008 2009 2010 2011 2012 2014 2015 fr→en en→fr fr→en en→fr fr→en en→fr fr→en en→fr fr→en en→fr fr→en en→fr fr→en en→fr vanilla 24.6 23.4 28.8 26.6 30.7 28.7 31.3 30.9 31.1 29.5 36.0 36.0 36.2 34.0 BT 22.9 23.2 26.5 27.7 29.3 28.2 29.4 30.9 29.7 28.4 36.6 32.9 36.2 35.7 T-BT 24.5 23.8 28.9 27.3 31.2 30.0 31.8 31.6 31.8 28.9 37.3 38.2 36.6 36.0 selection 24.7 23.9 29.0 28.2 31.5 30.9 33.0 32.7 32.5 29.9 37.5 38.9 36.3 37.8 Table 6: BLEU scores for all the systems for en-fr on the overall test sets. “selection” denotes that decoding is performed by using the best model given the origin of the source sentence.
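The "selection" row implements a simple rule: for each source origin, keep whichever system scored best on the corresponding slice of the validation set. A sketch of that rule, with the per-origin validation BLEU scores assumed to be precomputed:

```python
def select_system(dev_bleu_by_origin):
    """dev_bleu_by_origin: {"original": {"vanilla": ..., "BT": ..., "T-BT": ...},
                            "non-original": {...}}
    Returns the system name to use for each source origin at test time."""
    return {origin: max(scores, key=scores.get)
            for origin, scores in dev_bleu_by_origin.items()}
```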
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5998–6003 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 5998 Worse WER, but Better BLEU? Leveraging Word Embedding as Intermediate in Multitask End-to-End Speech Translation Shun-Po Chuang1, Tzu-Wei Sung2, Alexander H. Liu1, and Hung-yi Lee1 1National Taiwan University, Taiwan 2University of California San Diego, USA {f04942141, b03902042, r07922013, hungyilee}@ntu.edu.tw Abstract Speech translation (ST) aims to learn transformations from speech in the source language to the text in the target language. Previous works show that multitask learning improves the ST performance, in which the recognition decoder generates the text of the source language, and the translation decoder obtains the final translations based on the output of the recognition decoder. Because whether the output of the recognition decoder has the correct semantics is more critical than its accuracy, we propose to improve the multitask ST model by utilizing word embedding as the intermediate. 1 Introduction Speech translation (ST) increasingly receives attention from the machine translation (MT) community recently. To learn the transformation between speech in the source language and the text in the target language, conventional models pipeline automatic speech recognition (ASR) and text-to-text MT model (B´erard et al., 2016). However, such pipeline systems suffer from error propagation. Previous works show that deep end-to-end models can outperform conventional pipeline systems with sufficient training data (Weiss et al., 2017; Inaguma et al., 2019; Sperber et al., 2019). Nevertheless, well-annotated bilingual data is expensive and hard to collect (Bansal et al., 2018a,b; Duong et al., 2016). Multitask learning plays an essential role in leveraging a large amount of monolingual data to improve representation in ST. Multitask ST models have two jointly learned decoding parts, namely the recognition and translation part. The recognition part firstly decodes the speech of source language into the text of source language, and then based on the output of the recognition part, the translation part generates the text in the target language. Variant multitask models have been explored (Anastasopoulos and Chiang, 2018), which shows the improvement in low-resource scenario. Although applying the text of source language as the intermediate information in multitask end-toend ST empirically yielded improvement, we argue whether this is the optimal solution. Even though the recognition part does not correctly transcribe the input speech into text, the final translation result would be correct if the output of the recognition part preserves sufficient semantic information for translation. Therefore, we explore to leverage word embedding as the intermediate level instead of text. In this paper, we apply pre-trained word embedding as the intermediate level in the multitask ST model. We propose to constrain the hidden states of the decoder of the recognition part to be close to the pre-trained word embedding. Prior works on word embedding regression show improved results on MT (Jauregi Unanue et al., 2019; Kumar and Tsvetkov, 2018). Experimental results show that the proposed approach obtains improvement to the ST model. Further analysis also shows that constrained hidden states are approximately isospectral to word embedding space, indicating that the decoder achieves speech-to-semantic mappings. 
2 Multitask End-to-End ST model Our method is based on the multitask learning for ST (Anastasopoulos and Chiang, 2018), including speech recognition in the source language and translation in the target language, as shown in Fig. 1(a). The input audio feature sequence is first encoded into the encoder hidden state sequence h = h1, h2, . . . , hT with length T by the pyramid encoder (Chan et al., 2015). To present speech recognition in the source language, the attention mechanism and a decoder is employed to produce source decoder sequence ˆs = ˆs1, ˆs2, . . . , ˆsM, where M is the number of decoding steps in the source language. For each decoding step m, the probability P(ˆym) of predicting the token ˆym in the source language vocabulary can be computed based on the corresponding decoder state ˆsm. 5999 […, 𝑦%& , …] […, 𝑦),… ] Attention Source Language Speech Features [ ℎ+ , ℎ, , …, ℎ-] Attention Source Language Decoder [ 𝑠̂+ , 𝑠̂,, … , 𝑠̂0] Attention Concatenation Target Language Decoder Recognition ( Intermedia Level ) Translation […, 𝑃(𝑦%&),… ] [… , 𝑃(𝑦)),… ] Cross Entropy Linear Linear [… , 𝑠̂& , … ] Linear […, 𝑒& , …] […, 𝑒̂5%6 , …] Cosine Distance … 𝑒̂+ 𝑒̂|8| Cosine Similarity […, 𝑦%&, …] Cross Entropy (c) Cosine Softmax (CS) (a) Multitask End-to-End Model Speech Encoder (b) Cosine Distance (CD) [… , 𝑠̂& , … ] Linear […, 𝑒& , …] [… , 𝑃9:(𝑦%&), … ] 𝑒̂, 𝑠𝑜𝑓𝑡𝑚𝑎𝑥 𝑠𝑜𝑓𝑡𝑚𝑎𝑥 Source Language Decoder 𝑠𝑜𝑓𝑡𝑚𝑎𝑥 Source Language Decoder 𝐸B Figure 1: (a) Multitask ST model. Dotted arrows indicate steps in the recognition part. Solid arrows indicate steps in the translation part. (b) Directly learn word embedding via cosine distance. (c) Learn word embedding via cosine softmax function. Both (b)(c) are the recognition part in (a). To perform speech translation in the target language, both the source language decoder state sequence ˆs and the encoder state sequence h will be attended and treated as the target language decoder’s input. The hidden state of target language decoder can then be used to derived the probability P(yq) of predicting token yq in the target language vocabulary for every decoding step q. Given the ground truth sequence in the source language ˆy = ˆy1, ˆy2, . . . , ˆyM and the target language y = y1, y2, . . . , yQ with length Q, multitask ST can be trained with maximizing log likelihood in both domains. Formally, the objective function of multitask ST can be written as: LST = α M Lsrc + β QLtgt = α M X m −log P(ˆym) + β Q X q −log P(yq), (1) where α and β are the trade-off factors to balance between the two tasks. 3 Proposed Methods We propose two ways to help the multitask endto-end ST model capture the semantic relation between word tokens by leveraging the source language word embedding as intermediate level. ˆE = {ˆe1, ˆe2, ...ˆe|V |}, where V is the vocabulary set and ˆev ∈RD is the embedding vector with dimension D for any word v ∈V , in the recognition task. We choose the source language decoder state (embedding) ˆs to reinforce since it is later used in the translation task. To be more specific, we argue that the embedding generated by the source language decoder should be more semantically correct in order to benefit the translation task. Given the pre-trained source language word embedding ˆE, we proposed to constrain the source decoder state ˆsm at step m to be close to its corresponding word embedding ˆeˆym with the two approaches detailed in the following sections. 
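For reference, the multitask objective in Eq. (1) can be written as a short loss function. The following is a minimal PyTorch-style sketch under the assumption that per-step scores from the two decoders have already been computed; the function and tensor names are illustrative, not the authors' implementation.

```python
import torch.nn.functional as F

def multitask_st_loss(src_logits, src_targets, tgt_logits, tgt_targets,
                      alpha=1.0, beta=1.0):
    """Eq. (1): L_ST = (alpha/M) * sum_m -log P(y^_m) + (beta/Q) * sum_q -log P(y_q).

    src_logits: (M, V_src) scores from the recognition (source-language) decoder
    tgt_logits: (Q, V_tgt) scores from the translation (target-language) decoder
    src_targets, tgt_targets: gold token ids of shape (M,) and (Q,)
    """
    # reduction="mean" averages over decoding steps, which realizes the
    # 1/M and 1/Q normalization of Eq. (1).
    loss_src = F.cross_entropy(src_logits, src_targets, reduction="mean")
    loss_tgt = F.cross_entropy(tgt_logits, tgt_targets, reduction="mean")
    return alpha * loss_src + beta * loss_tgt
```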
3.1 Directly Learn Word Embedding Since semantic-related words would be close in terms of cosine distance (Mikolov et al., 2018), a simple idea is to minimize the cosine distance (CD) between the source language decoder hidden state ˆsm and the corresponding word embedding ˆeˆym for every decode step m, LCD = X m 1 −cos(fθ(ˆsm), ˆeˆym) = X m 1 − fθ(ˆsm) · ˆeˆym ∥fθ(ˆsm)∥∥ˆeˆym∥, (2) where fθ(·) is a learnable linear projection to match the dimensionality of word embedding and decoder state. With this design, the network architecture of the target language decoder would not be limited by the dimension of word embedding. Fig. 1(b) illustrates this approach. By replacing Lsrc in Eq. (1) with LCD, semantic learning from word embedding for source language recognition can be achieved. 3.2 Learn Word Embedding via Probability Ideally, using word embedding as the learning target via minimizing CD can effectively train the decoder to model the semantic relation existing in the embedding space. However, such an approach suffers from the hubness problem (Faruqui et al., 2016) of word embedding in practice (as we later discuss in Sec. 4.5). To address this problem, we introduce cosine softmax (CS) function (Liu et al., 2017a,b) to learn speech-to-semantic embedding mappings. Given the decoder hidden state ˆsm and the word embedding ˆE, the probability of the target word ˆym is defined as PCS(ˆym) = exp(cos(fθ(ˆsm), ˆeˆym)/τ) P ˆev∈ˆE exp(cos(fθ(ˆsm), ˆev)/τ), (3) 6000 where cos(·) and fθ(·) are from Eq. (2), and τ is the temperature of softmax function. Note that since the temperature τ re-scales cosine similarity, the hubness problem can be mitigated by selecting a proper value for τ. Fig. 1(c) illustrates the approach. With the probability derived from cosine softmax in Eq. (3), the objective function for source language decoder can be written as LCS = X m −log PCS(ˆym). (4) By replacing Lsrc in Eq. (1) with LCS, the decoder hidden state sequence ˆs is forced to contain semantic information provided by the word embedding. 4 Experiments 4.1 Experimental Setup We used Fisher Spanish corpus (Graff et al., 2010) to perform Spanish speech to English text translation. And we followed previous works (Inaguma et al., 2019) for pre-processing steps, and 40/160 hours of train set, standard dev-test are used for the experiments. Byte-pair-encoding (BPE) (Kudo and Richardson, 2018) was applied to the target transcriptions to form 10K subwords as the target of the translation part. Spanish word embeddings were obtained from FastText pre-trained on Wikipedia (Bojanowski et al., 2016), and 8000 Spanish words were used in the recognition part. The encoder is a 3-layer 512-dimensional bidirectional LSTM with additional convolution layers, yielding 8× down-sampling in time. The decoders are 1024-dimensional LSTM, and we used one layer in the recognition part and two layers in the translation part. The models were optimized using Adadelta with 10−6 as the weight decay rate. Scheduled sampling with probability 0.8 was applied to the decoder in the translation part. Experiments ran 1.5M steps, and models were selected by the highest BLEU on four transcriptions per speech in dev set. 4.2 Speech Translation Evaluation Baseline: We firstly built the single-task end-toend model (SE) to set a baseline for multitask learning, which resulted in 34.5/34.51 BLEU on dev and test set respectively, which showed comparable results to Salesky et al. (2019). Multitask end-to-end model (ME) mentioned in Sec. 2 is another baseline. 
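Before turning to these comparisons, the two objectives proposed in Sec. 3 can be summarized in code. The following is a minimal PyTorch-style sketch: `proj` stands in for the learnable projection f_theta, the temperature value is a placeholder rather than the value used in the paper, and the tensor shapes are assumptions for illustration.

```python
import torch.nn.functional as F

def cd_loss(dec_states, gold_ids, emb_table, proj):
    """Eq. (2): sum_m 1 - cos(f_theta(s^_m), e^_{y^_m})."""
    projected = proj(dec_states)               # (M, D); proj plays the role of f_theta
    gold_emb = emb_table[gold_ids]             # (M, D) frozen pre-trained embeddings
    return (1.0 - F.cosine_similarity(projected, gold_emb, dim=-1)).sum()

def cs_loss(dec_states, gold_ids, emb_table, proj, tau=0.1):
    """Eqs. (3)-(4): cross entropy over a cosine-softmax distribution."""
    projected = F.normalize(proj(dec_states), dim=-1)   # (M, D)
    vocab_emb = F.normalize(emb_table, dim=-1)          # (|V|, D)
    logits = projected @ vocab_emb.t() / tau            # cosine similarity / temperature
    return F.cross_entropy(logits, gold_ids, reduction="sum")
```

Note that normalizing both sides and taking an inner product is equivalent to the cosine in Eq. (3); the temperature only rescales the logits before the softmax.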
By applying multitask learning in addition, (a) 160 hours (b) 40 hours dev test dev test SE 34.50 34.51 17.41 15.44 ME 35.35 35.49 23.30 20.40 CD 33.06 33.65 23.53 20.87 CS 35.84 36.32 23.54 21.72 Table 1: BLEU scores trained on different size of data. we could see that ME outperforms SE in all conditions. High-resource: Column (a) in Table 1 showed the results trained on 160 hours of data. CD and CS represent the proposed methods mentioned in Sec. 3.1 and 3.2 respectively. We got mixed results on further applying pre-trained word embedding on ME. CD degraded the performance, which is even worse than SE, but CS performed the best. Results showed that directly learn word embedding via cosine distance is not a good strategy in the high-resource setting, but integrating similarity with cosine softmax function can significantly improve performance. We leave the discussion in Sec. 4.5. Low-resource: We also experimented on 40 hours subset data for training, as shown in column (b) in Table 1. We could see that ME, CD and CS overwhelmed SE in low-resource setting. Although CD resulted in degrading performance in high-resource setting, it showed improvements in low-resource scenario. CS consistently outperformed ME and CD on different data size, showing it is robust on improving ST task. 4.3 Analysis of Recognition Decoder Output In this section, we analyzed hidden states s by existing methods. For each word v in corpus, we denoted its word embedding ˆev as pre-trained embedding, and ev as predicted embedding. Note that because a single word v could be mapped by multiple audio segments, we took the average of all its predicted embedding. We obtained the top 500 frequent words in the whole Fisher Spanish corpus, and tested on the sentences containing only these words in test set. Eigenvector Similarity: To verify our proposed methods can constrain hidden states in the word embedding space, we computed eigenvector similarity between predicted embedding and pre-trained embedding space. The metric derives from Laplacian eigenvalues and represents how similar be6001 160 hours 40 hours dev test dev test ME 16.50 18.58 13.80 15.09 CD 2.60 3.44 3.95 3.63 CS 11.55 13.76 8.62 9.80 Table 2: Eigenvector similarity. 160 hours 40 hours P@1 P@5 P@1 P@5 ME 1.85 6.29 1.11 9.62 CD 61.48 77.40 56.30 69.25 CS 17.78 35.19 10.37 25.19 Table 3: Precision@k of semantic alignment on test set. tween two spaces, the lower value on the metric, the more approximately isospectral between the two spaces. Previous works showed that the metric is correlated to the performance of translation task (Søgaard et al., 2018; Chung et al., 2019). As shown in Table 2, predicted embedding is more similar to pre-trained embedding when models trained on sufficient data (160 v.s 40 hours). CD is the most similar case among the three cases, and ME is the most different case. Results indicated that our proposals constrain hidden states in pre-trained embedding space. Semantic Alignment: To further verify if predicted embedding is semantically aligned to pretrained embedding, we applied Procrustes alignment (Conneau et al., 2017; Lample et al., 2017) method to learn the mapping between predicted embedding and pre-trained embedding. Top 50 frequent words were selected to be the training dictionary, and we evaluated on the remaining 450 words with cross-domain similarity local scaling (CSLS) method. Precision@k (P@k, k=1,5) were reported as measurements. As shown in Table 3, CD performed the best, and ME was the worst one. 
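For readers who want to reproduce this analysis, the Procrustes step it relies on has a simple closed form. The sketch below uses the standard orthogonal-Procrustes solution; variable names are ours, and the CSLS retrieval step is omitted.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal Procrustes: W* = argmin_W ||XW - Y||_F with W orthogonal.

    X: (n, d) predicted embeddings of the dictionary words (top-50 frequent)
    Y: (n, d) pre-trained embeddings of the same words
    Closed-form solution: W = U V^T, where U S V^T is the SVD of X^T Y.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt   # (d, d) rotation mapping the predicted space onto the pre-trained one
```

The remaining 450 test words are then mapped with `X_test @ W` and scored against the pre-trained vectors (with CSLS in the paper, or plain cosine nearest neighbours in the simplest case) to obtain Precision@k.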
This experiment reinforced that our proposals can constrain hidden states to the similar structure of word embedding space. 4.4 Speech Recognition Evaluation We further analyzed the results of speech recognition for ME and CS. To obtain the recognition results from Eq (3), simply take arg maxv PCS(v). The word error rate (WER) of the source language recognition was reported in Table 4. Combining the results shown in Table 1, we could see that CS 160 hours 40 hours dev test dev test ME 43.13 38.57 53.42 54.70 CS 50.15 44.43 57.63 57.21 Table 4: Word error rate (%) trained on different size of data. has worse WER, but higher BLEU compared with ME. We concluded that although leveraging word embedding at the intermediate level instead of text results in worse performance in speech recognition (this indicates that the WER of the recognition part does not fully determine the translation performance), the semantic information could somewhat help multitask models generate better translation in terms of BLEU. We do not include the WER of CD in Table 1 because its WER is poor (>100%), but interestingly, the BLEU of CD is still reasonable, which is another evidence that WER of the intermediate level is not the key of translation performance. 4.5 Cosine Distance (CD) v.s. Softmax (CS) Based on experimental results, we found that proposals are possible to map speech to semantic space. With optimizing CS, BLEU consistently outperformed ME, which shows that utilizing semantic information truly helps on ST. Directly minimizing cosine distance made the predicted embedding space closest to pre-trained embedding space, but performed inconsistently on BLEU in different data sizes. We inferred that the imbalance word frequency training and hubness problem (Faruqui et al., 2016) in word embedding space made hidden states not discriminated enough for the target language decoder while optimizing CS can alleviate this issue. 5 Conclusions Our proposals showed that utilizing word embedding as intermediate helps with the ST task, and it is possible to map speech to the semantic space. We also observed that lower WER in source language recognition not imply higher BLEU in target language translation. This work is the first attempt to utilize word embedding in the ST task, and further techniques can be applied upon this idea. For example, crosslingual word embedding mapping methods can be considered within the ST model to shorten the distance between MT and ST tasks. 6002 References Antonios Anastasopoulos and David Chiang. 2018. Tied multitask learning for neural speech translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 82–91, New Orleans, Louisiana. Association for Computational Linguistics. Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2018a. Lowresource speech-to-text translation. arXiv preprint arXiv:1803.09164. Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, and Sharon Goldwater. 2018b. Pre-training on high-resource speech recognition improves low-resource speech-to-text translation. arXiv preprint arXiv:1809.01431. Alexandre B´erard, Olivier Pietquin, Christophe Servan, and Laurent Besacier. 2016. Listen and translate: A proof of concept for end-to-end speech-to-text translation. arXiv preprint arXiv:1612.01744. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. 
arXiv preprint arXiv:1607.04606. William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. 2015. Listen, attend and spell. arXiv preprint arXiv:1508.01211. Yu-An Chung, Wei-Hung Weng, Schrasing Tong, and James Glass. 2019. Towards unsupervised speechto-text translation. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7170–7174. IEEE. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. Long Duong, Antonios Anastasopoulos, David Chiang, Steven Bird, and Trevor Cohn. 2016. An attentional model for speech translation without transcription. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 949–959. Manaal Faruqui, Yulia Tsvetkov, Pushpendre Rastogi, and Chris Dyer. 2016. Problems with evaluation of word embeddings using word similarity tasks. arXiv preprint arXiv:1605.02276. David Graff, Shudong Huang, Ingrid Cartagena, Kevin Walker, , and Christopher Cieri. 2010. Fisher spanish speech (ldc2010s01). https://catalog.ldc.upenn.edu/LDC2010S01. Hirofumi Inaguma, Kevin Duh, Tatsuya Kawahara, and Shinji Watanabe. 2019. Multilingual end-to-end speech translation. arXiv preprint arXiv:1910.00254. Inigo Jauregi Unanue, Ehsan Zare Borzeshi, Nazanin Esmaili, and Massimo Piccardi. 2019. ReWE: Regressing word embeddings for regularization of neural machine translation systems. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 430–436, Minneapolis, Minnesota. Association for Computational Linguistics. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226. Sachin Kumar and Yulia Tsvetkov. 2018. Von mises-fisher loss for training sequence to sequence models with continuous outputs. arXiv preprint arXiv:1812.04616. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. Yu Liu, Hongyang Li, and Xiaogang Wang. 2017a. Learning deep features via congenerous cosine loss for person recognition. arXiv preprint arXiv:1702.06890. Yu Liu, Hongyang Li, and Xiaogang Wang. 2017b. Rethinking feature discrimination and polymerization for large-scale recognition. arXiv preprint arXiv:1710.00870. Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018). Elizabeth Salesky, Matthias Sperber, and Alan W Black. 2019. Exploring phoneme-level speech representations for end-to-end speech translation. arXiv preprint arXiv:1906.01199. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. arXiv preprint arXiv:1805.03620. Matthias Sperber, Graham Neubig, Jan Niehues, and Alex Waibel. 2019. Attention-passing models for robust and data-efficient end-to-end speech translation. Transactions of the Association for Computational Linguistics, 7:313–325. 6003 Ron J Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, and Zhifeng Chen. 
2017. Sequence-tosequence models can directly translate foreign speech. arXiv preprint arXiv:1703.08581. A Appendix A.1 Single-task end-to-end model One of our baseline models is a single-task end-toend model, which is abbreviated as SE in the previous section. SE was trained using the source language speech and the target language text. It shares the same architecture with the multitask model but without the source language text decoding (without the recognition part in Fig. 1(a)). And its objective function can be written as: LSE = Ltgt = X q −log P(yq). (5) Further details can be referred to (Anastasopoulos and Chiang, 2018). A.2 Using different Word Embeddings Our proposed model benefits from publicly available pre-trained word embedding, which is easyto-obtain yet probably coming from the domains different from testing data. It can bring to ST models in a simple plug-in manner. In Sec. 4.2, we used word embedding trained on Wikipedia. To demonstrate the improvement of using different word embeddings, we additionally provide results of ST models using word embeddings trained on Fisher Spanish corpus (train and dev set) in Table 5. Here we use the abbreviation of word embedding trained on Wikipedia as W-emb and word embedding trained on Fisher Spanish corpus as F-emb. In CD/CS method, using F-emb obtained 0.27/0.61 improvement from using W-emb on dev set. And, CD got 0.15 improvement but CS got 0.51 degrading performance on test set. The improvements show that using word embeddings trained in the related domain helps on the performance. In CD method, although using F-emb improves the performance, it still under-performed ME method. It indicates that the selection of adopting methods is critical. In CS method, it got a great improvement on dev set but not on test set. It shows that using F-emb does help with the performance, but using word embedding trained on rich data (Wemb) could provide additional information that can generally extend to the test set. Word Embedding Source 160 hours dev test ME 35.35 35.49 CD Wikipedia 33.06 33.65 Fisher Spanish 33.33 33.80 CS Wikipedia 35.84 36.32 Fisher Spanish 36.45 35.81 Table 5: BLEU scores on using different pre-trained word embeddings. In general, whether using F-emb or W-emb as the training target, the experimental results show consistency to the discussion in Sec. 4.2.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6004–6009 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6004 Neural-DINF: A Neural Network based Framework for Measuring Document Influence Jie Tan, Changlin Yang, Ying Li, Siliang Tang∗, Chen Huang, Yueting Zhuang Zhejiang University {tanjie95, yclzju, ying.li}@zju.edu.cn, {siliang, beside.huang, yzhuang}@zju.edu.cn Abstract Measuring the scholarly impact of a document without citations is an important and challenging problem. Existing approaches such as Document Influence Model (DIM) are based on dynamic topic models, which only consider the word frequency change. In this paper, we use both frequency changes and word semantic shifts to measure document influence by developing a neural network based framework. Our model has three steps. Firstly, we train word embeddings for different time periods. Subsequently, we propose an unsupervised method to align vectors for different time periods. Finally, we compute the influence value of documents. Our experimental results show that our model outperforms DIM. 1 Introduction Identifying the most influential articles is of great importance in many areas of research. It is often the case that we are increasingly exposed to numerous papers published every day. Research on influence evaluation can be applied to measure the scholarly impact of universities and research facilities. Besides, it helps researchers to distinguish valuable research work from a large number of scientific papers. The common approach of assessing an article’s research impact is to count the number of explicit references to it. However, citations are often not available. For example, collections including blog posts and government documents adopt ideas proposed in the documents without explicit references (Stringer et al., 2008; Macroberts and Macroberts, 2010). To identify influential articles without citations, Gerrish and Blei (2010) and Gerow et al. (2018) proposed probabilistic methods, which are based on dynamic topic models (Blei and Lafferty, 2006). ∗Corresponding Author. They aimed to identify influential articles by examining the word frequency change over time. In this paper, we aim to use both word frequency changes and word semantic shifts on measuring document influence without citations. For our purpose, we propose a neural network based method called Neural-DINF, which stands for a Neural Network based Framework for measuring Document Influence. Our idea is that words that have semantic shifts across time contribute significantly to the influence of a document. Recent studies show that words whose word embeddings across different time periods diverge significantly are suspected to have semantic shifts (Kim et al., 2014; Kulkarni et al., 2015; Hamilton et al., 2016). Neural-DINF first generates static word embeddings in each time period by using Word2Vec (Mikolov et al., 2013b,a) independently, then aligns embeddings to the same vector space with an unsupervised method, subsequently calculates differences of the embeddings of many words across time to identify words that experience semantic shifts, finally measures the influence of a document by counting these crucial words. In summary, this paper makes the following main contributions: • We consider both word frequency changes and word semantic shifts on measuring document influence without citations by developing a novel neural network framework. 
• In the semantic change detection step, we propose an unsupervised method to align word embeddings across time. • Neural-DINF outperforms dynamic topic based models such as DIM, which only considers the word frequency change. This paper is organized as follows: Section 2 states related work; Section 3 formulates our approach; 6005 Section 4 presents our experiments; Section 5 concludes our work. 2 Related work There are two lines of literature that are closely related to our work: document influence evaluation and semantic shift detection. 2.1 Document Influence Evaluation Assessing document influence only based on texts is a challenging task. Garfield et al. (2002) considered that the impact of a journal is based on aggregate citation counts. To identify influential articles without citations, Gerrish and Blei (2010) proposed the document influence model (DIM), which is a probabilistic model based on the dynamic topic model (Blei and Lafferty, 2006). In DIM, they considered the word frequency change and a document whose words can help the way the word frequencies change will have a high influence score. Gerow et al. (2018) improved DIM by incorporating features, such as authorship, affiliation, and publication venue and they aimed to explain how influence arises. In practice, this additional information is not often available. In this paper, we measure document influence from a more fine-grained level by considering word semantic shifts. Our work differs from the above studies by considering both word frequency changes and word semantic shifts. Specially, we aim to find words that present significant changes in their meanings and we think these words contribute significantly to document influence. Neural-DINF assigns influence scores to documents based on how many of these important words are included in these documents. 2.2 Semantic Shift Detection There has been a lot of research on detecting semantic changes across time (Kay, 1979; Traugott, 1989; Blank, 1999; Zhang et al., 2016; Liao and Cheng, 2016; Bamler and Mandt, 2017). In general, most approaches learn individual embeddings for different time slices and recognize the changes by comparing these embeddings. These vectors have to be aligned into the same vector space for comparison. To achieve alignment, Kim et al. (2014) trained word vectors for different years and then initialized the word vectors in subsequent years with the word vectors obtained from the previous years. Kulkarni et al. (2015) and Hamilton et al. (2016) addressed the embedding alignment problem by learning a linear transformation of words between any two time periods. Most of the alignment methods require anchor words whose meaning does not change between the two time slices. However, it is difficult for us to acquire this kind of prior knowledge, which involves additional expert supervision. In this paper, inspired by Conneau et al. (2017), we propose an adversarial network for unsupervised cross-time alignment. Different from existing approaches, our method is unsupervised and does not require expert information. 3 Method Our Neural-DINF contains the following three steps. First, we generate static word embeddings in each time slice separately. Then, we implement an unsupervised approach with adversarial training and a refinement procedure to align these embeddings to the same vector space. Finally, we present a new metric to evaluate the influence of a document without citations. 
3.1 Word Embedding Generation Our method first learns individual word embeddings for different time periods and any reasonable word embedding generation approach can be used for this purpose. We consider a text corpus collected across time and use the texts of the documents to train word embeddings. We define our text corpus as D= (D1, . . . , DT ), where each Dt(t = 1, . . . , T) is the texts of all documents in the t-th time slice. The length of these time slices is years in our model. Given any time slice of the texts, our goal is to learn word embeddings through Word2Vec (Mikolov et al., 2013b,a). 3.2 Unsupervised Cross-time Alignment As our word embeddings for different time periods are trained in different vector spaces, we need to align them to the unified vector space for comparison. We aim at learning a mapping between word vectors for two different time periods. Let S′ = {s′ 1, s′ 2, . . . , s′ m} ⊆Rd and S = {s1, s2, . . . , sn} ⊆Rd be two sets of m and n word embeddings from time slices t′ and t respectively where t′ ∈{t+1, . . . , T}. Ideally, we can use a known dictionary including words that do not experience semantic shifts. Then we can learn a linear mapping W between the two embedding 6006 spaces such that: W ∗= arg min W∈Rd×d ∥WX −Y ∥2, (1) where d is the dimension of the embeddings, and X and Y are two aligned matrices of size d×k formed by k word embeddings selected from S′ and S, respectively. During the inference time, the aligned embedding of any word w at time slices t′ is defined as arg maxsj∈T cos(Ws′ w, sj). In this paper, we aim to learn this mapping W without using anchor words, which does not change meaning between the two time slices. We first apply an adversarial network to learn an initial proxy of W, then refine the model by using a synthetic parallel dictionary. Domain-Adversarial Training. We define a discriminator which aims at discriminating between elements randomly samples from WS′ = Ws′ 1, Ws′ 2, . . . , Ws′ m and S. The mapping W can be regarded as a generator, which aims at preventing the discriminator from making accurate predictions. The discriminator is designed to maximize its ability to identify the origin of an embedding, and the generator makes WS′ and S as similar as possible to prevent the discriminator from accurately predicting the embedding origins. We denote the discriminator parameters as θD. Given the mapping W, the optimization objective of the discriminator can be defined as: LD(θD|W) = −1 m m X i=1 log PθD(origin = 1|Ws′ i) −1 n n X j=1 log PθD(origin = 0|sj), (2) where PθD(origin = 1|z) is the probability that z originates from the embedding space at time slice t′ (as opposed to an embedding from the embedding space at time slice t). The mapping W is trained to prevent the discriminator from accurately predicting embedding origins and the optimization objective can be defined as: LW (W|θD) = −1 m m X i=1 log PθD(origin = 0|Ws′ i) −1 n n X j=1 log PθD(origin = 1|sj). (3) According to the standard training process of adversarial networks (Goodfellow et al., 2014), the discriminator θD and the mapping W are consecutively trained to respectively minimize LD and LW . Refinement Procedure. The refinement procedure is designed to improve the performance of alignment after the domain-adversarial training step. We obtain a linear transformation W that maps a word from time slices t′ to t in the last step. 
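A rough sketch of this adversarial stage, with the alternating updates implied by Eqs. (2) and (3), is given below. The discriminator architecture, optimizer settings, and batch handling are illustrative placeholders rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

d = 300
W = nn.Linear(d, d, bias=False)                              # the mapping W
D = nn.Sequential(nn.Linear(d, 2048), nn.LeakyReLU(0.2),
                  nn.Linear(2048, 1), nn.Sigmoid())          # discriminator (placeholder size)
opt_w = torch.optim.SGD(W.parameters(), lr=0.1)
opt_d = torch.optim.SGD(D.parameters(), lr=0.1)
bce = nn.BCELoss()

def adversarial_step(src_batch, tgt_batch):
    """src_batch: embeddings from time slice t'; tgt_batch: embeddings from time slice t."""
    # Discriminator update (Eq. 2): label 1 for mapped source vectors, 0 for target vectors.
    with torch.no_grad():
        mapped = W(src_batch)
    pred = D(torch.cat([mapped, tgt_batch])).squeeze(1)
    gold = torch.cat([torch.ones(len(mapped)), torch.zeros(len(tgt_batch))])
    loss_d = bce(pred, gold)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Mapping update (Eq. 3): flip the labels so W learns to fool the discriminator.
    pred = D(torch.cat([W(src_batch), tgt_batch])).squeeze(1)
    loss_w = bce(pred, 1.0 - gold)
    opt_w.zero_grad(); loss_w.backward(); opt_w.step()
```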
To refine our mapping W, we utilize the learned W to build a syntactic parallel dictionary that specifies which s′ i ∈S′ refer to which sj ∈S. Since the most frequent words are suspected to have better embeddings, we consider the most frequent words and keep only their mutual nearest neighbors. In the process of deciding mutual nearest neighbors, we use the Cross-Domain Similarity Local Scaling proposed in (Conneau et al., 2017) to alleviate the hubness problem (Dinu et al., 2014). Consequently, we use Eq. (1) on this obtained dictionary to refine W. To compare vectors from different time periods, we propose an unsupervised approach. An adversarial network is first used to learn an initial proxy of W. To optimize the mapping W, we use a synthetic parallel dictionary in which words’ semantics match the best. 3.3 Influence Evaluation In this section, Neural-DINF evaluates document influence without citations. Our model makes use of both word frequency changes and word semantic shifts to compute an influence score for each document. We quantify the semantic change of the words by calculating the cosine similarity of the embedding vectors for the same words in different years. We represent aligned vectors of the word w in t and t′ as w and w′ respectively. We compute the word meaning shift of w as follows: Vw = 1 −cos⟨w, w′⟩. (4) Given a document d of time slice t, the influence score of this document on the corpus Dt′ can be defined as: It′ d = X w∈Dt,t′∩D Vw · Ct d,w Ctw , (5) where Dt,t′ is the vocabulary consisting of cooccurence words of corpus Dt and Dt′, D is the 6007 vocabulary of document d, Ct d,w represents the frequency of word w in the document d, Ct w represents the frequency of word w in the corpus Dt. The document published at time slice t can only affect documents published after that time slice, so the influence score of document d on the corpus D can be defined as: Id = t′=T X t′=t+1 It′ d . (6) 4 Experiments Similar to previous studies (Gerrish and Blei, 2010; Gerow et al., 2018) on measuring documents’ scholarly impact, we evaluate the performance of Neural-DINF by Pearson correlation and Spearman rank correlation of influence scores and citation counts. We reproduce the DIM (Gerrish and Blei, 2010) as our baseline and its experimental setup is as follows: topics’ Markov chain variance σ2 = 0.005, topic number K = 5, LDA (Blei et al., 2003) hyperparameter α = 0.001 . In Neural-DINF, word embeddings are generated by training on the corpus of each year and word embedding size is 300. We only select the first 10k most frequent words in each year in our experiments. This threshold is determined by the size of the smallest vocabulary in the years (2002-2013). In the unsupervised alignment, we use the default setting specified in (Conneau et al., 2017) to build a discriminator and the dimension of W is 300×300. Stochastic gradient descent(SGD) is used to train the discriminator and W with the learning rate of 0.1. We only feed the discriminator with 3000 most frequent words. This is because the embeddings of rare words are of low quality (Luong et al., 2013), which makes them harder to align. It is observed that feeding the discriminator with rare words had a small negative impact which cannot be ignored. In the refinement procedure, we retain the same setting presented in (Conneau et al., 2017). 
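To make the scoring of Sec. 3.3 concrete, the influence computation of Eqs. (4)-(6) can be sketched as follows once the aligned embeddings are available. The data structures (per-year vocabularies, count dictionaries, aligned vectors) are assumptions about bookkeeping, not the authors' code.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def influence_score(doc_counts, corpus_counts_t, vocab_by_year, aligned_vecs, t, T):
    """Eqs. (4)-(6) for a document d published in year t.

    doc_counts:      {word: count of the word in document d}        -> C^t_{d,w}
    corpus_counts_t: {word: count of the word in the corpus D_t}    -> C^t_w
    vocab_by_year:   {year: set of words occurring in D_year}
    aligned_vecs:    {(word, year): aligned embedding of word at that year}
    """
    score = 0.0
    for t_next in range(t + 1, T + 1):                         # Eq. (6): later years only
        shared = vocab_by_year[t] & vocab_by_year[t_next]      # D_{t,t'}
        for w in shared & set(doc_counts):
            v_w = 1.0 - cosine(aligned_vecs[(w, t)], aligned_vecs[(w, t_next)])  # Eq. (4)
            score += v_w * doc_counts[w] / corpus_counts_t[w]                    # Eq. (5)
    return score
```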
4.1 Data For evaluation, we analyze a sequential corpus The Association for Computational Linguistics Anthology (ACL Anthology), which is a collection of documents on the study of computational linguistics and natural language processing (Bird et al., 2008). Following the experimental setup in DIM, we only use the texts and dates of this corpus. We analyze a subsample from ACL Anthology, spanning from 2002 to 2013, which contains 11106 articles and 18960 unique tokens after preprocessing. We remove short documents and words that have low frequency and low TF-IDF value. Citation counts of articles are obtained from ACL Anthology Network (Joseph and Radev, 2007; Leskovec et al., 2009; Radev et al., 2013). 4.2 Result We compare the correlation coefficient scores on DIM and Neural-DINF in Table 1. The Pearson correlation computed by Neural-DINF and DIM is 0.186 and 0.118 respectively. The Spearman rank correlation computed by Neural-DINF and DIM is 0.249 and 0.102 respectively. The results show that our model outperforms the DIM. Method Pearson correlation Spearman rank correlation DIM 0.118 0.102 NeuralDINF 0.186 0.249 Table 1: Pearson correlation and Spearman rank correlation between citation counts and the influence score. We also visualize the performances of DIM and our Neural-DINF to validate the effectiveness of our proposed model. As shown in Figure 1, for ACL documents with the highest 60% of influence scores. Neural-DINF covers 83% of citations, which outperforms DIM (68%) by a large marge. Figure 1: Fraction of citations explained by influence scores. In fact, the qualitative analysis does present some evidence that in many cases the Neural-DINF is a better model to produce reasonable scores for the most-cited papers in the used datasets. For 6008 example, A Systematic Comparison of Various Statistical Alignment Models (Och and Ney, 2003) is a top-cited article (citation ranking 3) in the dataset. This article receives a very high score both on the DIM and the Neural-DINF. However, the result of Neural-DINF ranking (31) is more close to its citation ranking than the DIM (236). Moreover, in some cases, only Neural-DINF can produce the correct score. For example, DIM assigns a relatively low influence score to (Collins, 2002) (citation ranking 9) in our dataset and ranks this article 11,106 out of 11,106 articles, while the NeuralDINF gives a relatively reasonable score to this article, ranking it 1,199 out of 11,106 articles. 5 Conclusion In this paper, we aim to evaluate document influence from a fine-grained level by additionally considering word semantic shifts. For our purpose, we develop Neural-DINF which measures document influence from the texts of documents. Besides, we propose an unsupervised method to address the alignment problem. The document receives an influence score based on how it explains the word frequency change and the word semantic shift. Our experimental results show that our model performs better than the DIM on ACL Anthology. Acknowledgments This work has been supported in part by National Key Research and Development Program of China (2018AAA010010), NSFC (No.61751209, U1611461), University-Tongdun Technology Joint Laboratory of Artificial Intelligence, Zhejiang University iFLYTEK Joint Research Center, Chinese Knowledge Center of Engineering Science and Technology (CKCEST), China Engineering Expert Tank, Engineering Research Center of Digital Library, Ministry of Education, the Fundamental Research Funds for the Central Universities. 
References Robert Bamler and Stephan Mandt. 2017. Dynamic word embeddings. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 380–389. JMLR. org. Steven Bird, Robert Dale, Bonnie J Dorr, Bryan Gibson, Mark Thomas Joseph, Min-Yen Kan, Dongwon Lee, Brett Powley, Dragomir R Radev, and Yee Fan Tan. 2008. The acl anthology reference corpus: A reference dataset for bibliographic research in computational linguistics. Andreas Blank. 1999. Why do new meanings occur? a cognitive typology of the motivations for lexical semantic change. Historical semantics and cognition, 13:6. David M Blei and John D Lafferty. 2006. Dynamic topic models. In Proceedings of the 23rd international conference on Machine learning, pages 113– 120. ACM. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1–8. Association for Computational Linguistics. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness problem. arXiv preprint arXiv:1412.6568. Eugene Garfield, Alexander I Pudovkin, and VS Istomin. 2002. Algorithmic citation-linked historiography—mapping the literature of science. Proceedings of the American Society for Information Science and Technology, 39(1):14–24. Aaron Gerow, Yuening Hu, Jordan Boyd-Graber, David M Blei, and James A Evans. 2018. Measuring discursive influence across scholarship. Proceedings of the national academy of sciences, 115(13):3308–3313. Sean Gerrish and David M Blei. 2010. A languagebased approach to measuring scholarly impact. In ICML, volume 10, pages 375–382. Citeseer. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc. William L Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic word embeddings reveal statistical laws of semantic change. arXiv preprint arXiv:1605.09096. Mark T Joseph and Dragomir R Radev. 2007. Citation analysis, centrality, and the acl anthology. Technical report, Citeseer. 6009 Margarita Kay. 1979. Lexemic change and semantic shift in disease names. Culture, medicine and psychiatry, 3(1):73–94. Yoon Kim, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. arXiv preprint arXiv:1405.3515. Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change. In Proceedings of the 24th International Conference on World Wide Web, pages 625–635. International World Wide Web Conferences Steering Committee. Jure Leskovec, Lars Backstrom, and Jon Kleinberg. 2009. Meme-tracking and the dynamics of the news cycle. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 497–506. ACM. Xuanyi Liao and Guang Cheng. 
2016. Analysing the semantic change based on word embedding. In Natural language understanding and intelligent applications, pages 213–223. Springer. Minh-Thang Luong, Richard Socher, and Christopher D Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104– 113. M. H. Macroberts and B. R. Macroberts. 2010. Problems of citation analysis: A study of uncited and seldom-cited influences. Journal of the Association for Information Science & Technology, 61(1):1–12. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Dragomir R Radev, Pradeep Muthukrishnan, Vahed Qazvinian, and Amjad Abu-Jbara. 2013. The acl anthology network corpus. Language Resources and Evaluation, 47(4):919–944. Michael J. Stringer, Sales Pardo Marta, Amaral Lu´ıs A. Nunes, and Scalas Enrico. 2008. Effectiveness of journal ranking schemes as a tool for locating information. Plos One, 3(2):e1683–. Elizabeth Closs Traugott. 1989. On the rise of epistemic meanings in english: An example of subjectification in semantic change. Language, pages 31–55. Yating Zhang, Adam Jatowt, Sourav S Bhowmick, and Katsumi Tanaka. 2016. The past is not a foreign country: Detecting semantically similar terms across time. IEEE Transactions on Knowledge and Data Engineering, 28(10):2793–2807.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6010–6021 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6010 Paraphrase Generation by Learning How to Edit from Samples Amirhossein Kazemnejad†, Mohammadreza Salehi‡, Mahdieh Soleymani Baghshah‡ †Iran University of Science and Technology, ‡Sharif University of Technology a [email protected], [email protected], [email protected] Abstract Neural sequence to sequence text generation has been proved to be a viable approach to paraphrase generation. Despite promising results, paraphrases generated by these models mostly suffer from lack of quality and diversity. To address these problems, we propose a novel retrieval-based method for paraphrase generation. Our model first retrieves a paraphrase pair similar to the input sentence from a pre-defined index. With its novel editor module, the model then paraphrases the input sequence by editing it using the extracted relations between the retrieved pair of sentences. In order to have fine-grained control over the editing process, our model uses the newly introduced concept of Micro Edit Vectors. It both extracts and exploits these vectors using the attention mechanism in the Transformer architecture. Experimental results show the superiority of our paraphrase generation method in terms of both automatic metrics, and human evaluation of relevance, grammaticality, and diversity of generated paraphrases. 1 Introduction Paraphrases are texts conveying the same meaning while using different words (Bhagat and Hovy, 2013). Paraphrase generation is an important task in Natural Language Processing (NLP) that has many applications in other down-stream tasks, such as text summarization, question answering, semantic parsing, and information retrieval (Cao et al., 2017; Fader et al., 2014; Berant and Liang, 2014). Early works on paraphrasing mostly investigated rule-based or statistical machine translation approaches to this task (Bannard and CallisonBurch, 2005). With the recent advances of neural sequence-to-sequence (Seq2Seq) framework in different NLP tasks, especially in machine translation, an increasing amount of literature have also applied Retreiver Training Corpus Step 1: Find the most similar pair Edit Provider Edit Performer Step 2: Generate the paraphrase Editor How can I increase my presence of mind ? What is best way to increase presence of mind ? How can I overcome absence of mind ? what is the best way to overcome absence of mind ? Figure 1: An overview of the proposed model. This model retrieves the most similar paraphrase pair to the input x from the training corpus (Retriever), computes a set of edit vectors [M, z] based on the retrieved pair (Edit Provider), and applies these edits to the input sequence x to generate its paraphrase (Edit Performer). Seq2Seq models to the task of paraphrase generation (Prakash et al., 2016; Gupta et al., 2018; Li et al., 2018). Although the proposed Seq2Seq methods for paraphrase generation have shown promising results, they are not yet as dominant as their counterparts used in neural machine translation. The main reason is that the available training data for paraphrasing is scarce and domain-specific (Wang et al., 2019). In fact, the necessity to generate sequences from scratch, which is a major drawback of traditional Seq2Seq models (Guu et al., 2018), magnifies itself when dealing with scarce training data. 
Thus, one can expect that the model would not be trained well and consequently, would not be able to generate diverse outputs. Although retrieval-based text generation has 6011 been evaluated recently in Guu et al. (2018); Hashimoto et al. (2018); Wu et al. (2019) as a remedy for this problem, to the best of our knowledge, there is no previous study exploring the usage of this approach in paraphrase generation. Moreover, none of the existing works in the realm of retrieval text generation, such as Guu et al. (2018); Wu et al. (2019); Hashimoto et al. (2018), focuses on learning how to extract edits from the retrieved sentences. Indeed, Guu et al. (2018); Wu et al. (2019) computes a single edit vector heuristically through concatenating the weighted sum of the inserted word embeddings and the weighted sum of deleted word embeddings. Moreover, Hashimoto et al. (2018) only focuses on improving the retrieving stage and uses a standard Seq2Seq model to edit the retrieved sentence. In this paper, we present an effective retrievalbased approach to paraphrase generation by proposing a novel editor module. Our method can be summarized as follows: Given an input sentence x, the model first retrieves a similar sentence p and its associated paraphrase q from the training data. Then, by getting x and (p, q), the editor both learns how to extract the fine-grained relations between p and q as a set of edits, and also when and how to use these extracted edits to paraphrase x. By incorporating the retrieved pairs into the editing process, we invigorate our model with a non-parametric memory, which enables it to produce non-generic and more diverse outputs. Both the retriever and editor components of our method are modeled by deep neural networks. We employ the Transformer architecture (Vaswani et al., 2017) as the backbone of our model, and use its attention mechanism as an effective tool to apply edits in a selective manner. Our main contributions are: • We propose the Fine-grained Sample-based Editing Transformer (FSET) model. It contains a novel editor that can be used in a retrieval-based framework for paraphrase generation. This editor learns how to discover the relationship between a pair of paraphrase sentences as a set of edits, and transforms the input sentence according to these edits. It is worth noting that the set of edits is learned in an end-to-end manner as opposed to Guu et al. (2018); Wu et al. (2019) that compute the edit vector heuristically. • For the first time, we utilize the Transformer as an efficient fully-attentional architecture for the task of retrieval-based text generation. • Experimentally, we compare our method with the recent paraphrase generation methods, and also with the retrieval-based text generation methods that have been introduced recently. Both of the quantitative and qualitative results show the superiority of our model. 2 Related Work 2.1 Neural paraphrase generation Prakash et al. (2016) was the first work that adapted a neural approach to paraphrase generation with a residual stacked LSTM network. Gupta et al. (2018) combined a variational auto-encoder with a Seq2Seq LSTM model to generate multiple paraphrases for a given sentence. Li et al. (2018) proposed a model in which a generator is first trained on the paraphrasing dataset, and then is fine-tuned by using reinforcement learning techniques. Cao et al. (2017) utilized separate decoders for copying and rewriting as the two main writing modes in paraphrasing. Mallinson et al. 
(2017) addressed paraphrasing with bilingual pivoting on multiple languages in order to better capture different aspects of the source sentence. Iyyer et al. (2018) proposed a method to generate syntactically controlled paraphrases and use them as adversarial examples. Chen et al. (2019) addressed the same problem, but the syntax is controlled by a sentence exemplar. Kajiwara (2019) proposed a model that first identifies a set of words to be paraphrased, and then generates the output by using a pre-trained paraphrase generation model. Wang et al. (2019) proposed a Transformer-based model that utilizes structured semantic knowledge to improve the quality of paraphrases. Kumar et al. (2019) modified the beam search algorithm with a sub-modular objective function to make the generated set of paraphrases syntactically diverse. Li et al. (2019) decomposed paraphrasing into sentential and phrasal levels and employed separate Transformer-based models for each of these levels. Fu et al. (2019) decomposes paraphrasing into two steps: content planning and surface realization, and improves the interpretability of the first step by incorporating a latent bag of words model. 2.2 Retrieval-based text generation Retrieval-based text generation has received much attention in the last few years. Song et al. (2016); 6012 Wu et al. (2019) augmented Seq2Seq generationbased models with retrieval frameworks to make the dialog responses more meaningful and nongeneric. Gu et al. (2017) utilized a search engine to retrieve a set of source-translation pairs from the training corpus, both at train and test time, and use them as a guide to translate an input query. Guu et al. (2018) proposed the neural editor model for unconditional text generation, which produces a new sentence by editing a retrieved prototype using an edit vector. Hashimoto et al. (2018) proposed a task-specific retriever using the variational framework to generate complex structured outputs, such as Python code. This work, however, does not have any novelty in the editor’s architecture and uses a standard Seq2Seq model with attention and copy mechanism (Hashimoto et al., 2018). 3 Proposed Approach Let D = {xn, yn}N n=1 denotes a dataset where xn is a sequence of words, and yn is its target paraphrase. In the paraphrasing task, our goal is to find the set of parameters of the model that maximizes QN n=1 pmodel(yn|xn). Figure 1 illustrates the overview of our proposed model which is composed of a Retriever and an Editor. Given an input sequence x, the retriever first finds a paraphrase pair (p, q) from the training corpus based on similarity of x and p. Then, the editor utilizes the retrieved pair (p, q) to paraphrase x. We discuss the details in the following subsections. 3.1 Retriever The goal of the retriever module is to select the paraphrase pairs (from the training corpus) that are similar to the input sequence x. To do that, the retriever finds a neighborhood set N(x) consisting of the K most similar source sentences {pk}K k=1 to x and their associated paraphrases {qk}K k=1 (K is a hyper-parameter of the model). To measure similarity of sentences, we first embed them employing the pre-trained transformer-based sentence encoder proposed by Cer et al. (2018). The similarity is then calculated using cosine similarity measure in the resulted embedding space. We call this retriever as General Retriever throughout the paper. 
Note that using a pre-trained retriever can help us to alleviate the scarcity problem of the training data available for paraphrasing1. 1Pre-trained model is available at https://tfhub.dev/google/universal-sentence-encoder-large/3 In order to search for the similar sentences to an input sequence efficiently, we use the FAISS software package (Johnson et al., 2019) to create a fast search index from the sentences in the training corpus. We would also pre-compute the neighborhood set of each source sentence in the training set, so at the training time, our model just needs to sample one of the pairs in the neighborhood set uniformly and feed it as an input to the editor module. The probability of retrieving a pair can thus be stated as p((p, q)|x) = 1 K 1[(p, q) ∈N(x)]. (1) Note that the same procedure also holds for the test time, and the retriever computes N(x) so the model can sample any one of the pairs in N(x) to generate the output based on that pair. 3.2 Editor To edit a sentence according to a retrieved pair, we propose an editor module consisting of two components: 1) Edit Provider and 2) Edit Performer. The Edit Provider computes a set of edit vectors based on the retrieved pair of sentences (p, q). After that, the Edit Performer rephrases the input sequence x by utilizing this prepared set of edits. 3.2.1 Edit Provider This part of the editor extracts the edits from the retrieved pair as a set of vectors which we call Micro Edit Vectors (MEVs). MEVs are responsible for encoding the information about fine-grained edits that transform p into q. Each one of the MEVs represents the most plausible soft alignment between a token in p and the semantically relevant parts in q: M = {mi := small edit applied on pi|1 ≤i ≤l} where l is the length of p. avoid how can one overcome procrastination ? how should i avoid procrastination ? Neural Network Step 1: Step 2: (overcome avoid) Compute edit Find the most similar in target Figure 2: The general scheme of computing a MEV corresponding to a token of p. Figure 2 presents, in schematic form, the procedure of computing one MEV. For each arbitrary 6013 token of p, such as pi, we intend to compute a MEV that encodes the edit corresponding to pi using attention over q. Then, given pi as the source of the edit, and the attention’s result as the target, we concatenate their representations and feed it as the input to a neural network, which calculates mi as the corresponding edit vector. To make this process differentiable and parallelizable, we use a fully-attentional architecture consisting of two main sub-modules: 1) Edit Encoder and 2) Target Encoder. Figure 3 shows the overview of the Edit Provider. In this model, at first, a context-aware representation Rq = [r1 q, ..., rk q ] of the sequence q is computed using the Target Encoder which is the encoder sub-graph of the Transformer architecture (Vaswani et al., 2017). The Edit Encoder is also the encoder of the Transformer model, but, with an extra multi-head attention over Rq. This module outputs a vector that encodes the most semantically relevant parts of q to pi. After that, the MEVs, i.e. mis, are computed by feeding these vectors one by one into a single dense layer (with the tanh(.) activation function). By setting the output dimension of the dense layer to be smaller than the dimension of the word embeddings, we introduce a bottleneck, which hinders the Edit Encoder from copying q directly. Dense Dense … … … Edit Encoder Target Encoder … Figure 3: Architecture of Edit Provider. 
The Edit Encoder uses multi-head attention on Rq to select the target of edit for each token of p. Note that by prepending [AGR] to p, we can encode all of the MEVS into a single edit vector zp→q. Finally, all of the MEVs are aggregated into a single vector z by leveraging a technique inspired by Devlin et al. (2019); we prepend a special token [AGR] to p in order to encode all the edits into a single vector zp→q. The intuition behind encoding into a single vector zp→q is to allow the model learn a global edit that can be applied to the whole sentence, in addition to the MEVs as local edits. We run the Edit Performer with the same parameters in the reverse direction, i.e. from q to p, to Self Attention Multi-Head Att on Input Multi-Head Att on MEVs Feed forward MEVs … Contextual Reprsentation … Decoder Output at (t-1) Decoder Output at (t) Edit Vector ( ) ; Input Sequence Encoder Figure 4: Illustration of the Edit Performer generating the output token at t-th time step. Note that only one layer of the decoder is depicted and the layernorms are not shown for simplicity. compute Rp and zq→p. The final edit vector z is then computed as z = Linear(zp→q ⊕zq→p), where Linear denotes a dense layer without activation and bias. 3.2.2 Edit Performer The Edit Performer transforms the input sequence x = [x1, ..., xs] to the final output ˆy using the edit vectors. We employ a fully-attentional Seq2Seq architecture composed of an encoder and a decoder for this part of the model. The encoder of the Edit Performer has exactly the same architecture as the original encoder of the Transformer model and outputs a context-aware representation Rx = {ri x}s i=1 of the input sequence. For the decoder, we use a slightly modified version of the original Transformer’s decoder. Indeed, the Transformer learns to model p(y|x), while we would like to model a conditional setting p(y|x, (p, q)). Moreover, as mentioned in the description of the Edit Provider, the relation between p and q is encoded in MEVs M and the vector z. Therefore, in order to edit x, instead of using (p, q) directly, we only need M and z to specify the edits, and the sentence p to identify the locations in x to which the edits should be applied. Thus, we aim to model p(y|x, p, M, z) with the Edit Performer. Figure 4 depicts the architecture of the Edit Performer. To condition the generation process on 6014 the edit vector z, we append it to each token of the decoder’s input. To apply the edits in a finegrained manner, we would like the model to attend to the most similar token of p and select the corresponding edit in MEVs M to be applied to the input sentence. Therefore, in addition to the input sequence representation Rx, the model also attends to MEVs M using an extra multi-head attention sub-layer which computes the representation h′ = MultiHeadAtt(Q: h, K: Rp, V: M), where h comes from the previous sub-layer and Rp is the context-aware representation of the retrieved sequence p, which is calculated by the Edit Provider. Hence, this sub-layer allows the model to apply edits only when the current context matches somewhere in p. Finally, we project h′ (after applying the residual connection and the layernorm) using a fully-connected sub-layer and feed it to the above layer. For the last layer, a softmax activation is employed to predict the next token of the output. 3.3 Training During the training phase, our aim is to maximize the log likelihood objective L = X (x,y)∈D log p(y|x). 
(2)

As we decompose the training procedure into the two stages of retrieving and editing, we can rewrite p(y|x) as

p(y \mid x) = \sum_{(p,q)\in D} p\big(y \mid x, (p, q)\big)\, p\big((p, q) \mid x\big).   (3)

Substituting Eq. 1 into Eq. 3 and then inserting the resulting p(y|x) into Eq. 2 yields the following formulation of the log likelihood:

\mathcal{L} = \sum_{(x,y)\in D} \log\Big( \frac{1}{K} \sum_{(p,q)\in N(x)} p\big(y \mid x, (p, q)\big) \Big).

We train our model by maximizing the following lower bound of the log likelihood, obtained by Jensen's inequality:

\mathcal{L} \;\ge\; \mathcal{L}' = \frac{1}{K} \sum_{(x,y)\in D} \sum_{(p,q)\in N(x)} \log p\big(y \mid x, (p, q)\big).

Note that p(y | x, (p, q)) = p_\theta\big(y \mid x, p, m_\phi(p, q), z_\phi(p, q)\big), where \theta denotes the parameters of the Edit Performer and \phi denotes the parameters of the Edit Provider. Thus, we solve the following optimization problem:

\theta^*, \phi^* = \operatorname*{argmax}_{\theta,\phi}\ \mathcal{L}'(\theta, \phi).

Except for the retriever, which is a pre-trained component of our model, all components are fully coupled and trained together. To prevent the model from ignoring the information coming from the retrieval pathway during training (i.e. ignoring the edit vectors extracted from the retrieved pair), we use a simple yet effective trick: we manually add extra (x, y) pairs to N(x), in proportion to the number of retrieved pairs K, so that the presence of y as the exact ground-truth paraphrase encourages the model to use the retrieved pairs more. Please refer to A.1 for further details.

4 Experiments

In this section, we empirically evaluate the performance of our proposed method on the task of paraphrase generation and compare it with various other methods, including previous state-of-the-art paraphrasing models.

4.1 Datasets

We conduct experiments on two of the most frequently used datasets for paraphrase generation: the Quora question pair dataset and the Twitter URL paraphrasing corpus. For the Quora dataset, we only consider the paraphrase pairs. Similar to Li et al. (2018), we sample 100k, 30k, and 3k instances for the train, test, and validation sets, respectively. The Twitter URL paraphrasing dataset consists of two subsets: one is labeled by human annotators and the other is labeled automatically, which makes it noisier than the Quora dataset. Similar to Li et al. (2018), we sample 110k instances from the automatically labeled part as our training set, and two non-overlapping subsets of 5k and 1k instances from the human-annotated part for the test and validation sets, respectively. As in Li et al. (2018, 2019), we truncate sentences in both datasets to 20 tokens.

Table 1: Settings of the model.

    Hyperparameter              Edit Performer    Edit Provider
    Hidden dimension            64                64
    # Layers                    6                 4
    # Heads                     8                 4
    MEV dimension m_i           40
    Edit vector z dimension     64
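To illustrate the retrieval pathway behind Eq. (1) and Eq. (3), the sketch below pre-computes each training sentence's neighborhood with FAISS and samples one retrieved pair per training step. The encode function stands in for the pre-trained sentence encoder and is a placeholder; the specific indexing choices are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np
import faiss  # Johnson et al. (2019)


def build_neighborhoods(src_sents, tgt_sents, encode, K=1):
    """Pre-compute N(x) for every training pair (x, y): the K most similar
    other source sentences, each paired with its ground-truth paraphrase."""
    X = encode(src_sents).astype("float32")   # [N, d] sentence embeddings
    faiss.normalize_L2(X)                     # cosine similarity via inner product
    index = faiss.IndexFlatIP(X.shape[1])
    index.add(X)
    # Query with K + 1 neighbors and drop the trivial hit on the query itself.
    _, nbrs = index.search(X, K + 1)
    return [
        [(src_sents[j], tgt_sents[j]) for j in row if j != i][:K]
        for i, row in enumerate(nbrs)
    ]


def sample_retrieved_pair(neighborhood, rng=np.random):
    """Eq. (1): pick (p, q) uniformly from N(x)."""
    return neighborhood[rng.randint(len(neighborhood))]
```

The same index can be reused at test time, since the retriever only needs to return N(x) for the input sentence.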
The latter two of the above list have been reported as the state-of-the-art models in paraphrase generation (Kumar et al., 2019; Li et al., 2019). • Retrieval-based models: We compare our method with one existing retrieval-based text generation model and two other combinational methods that we create by ourselves: – Seq2Seq+Ret which is an extended version of Seq2Seq Residual LSTM. This model conditions the generation process at each time step on an edit vector encoding the differences between the retrieved sentences p and q. To make the comparison fair, we use the General Retriever (introduced in the Retriever subsection of the Proposed Approach Section) to find (p, q). The edit vector for this pair is also computed by concatenating the sum of inserted word embeddings with the sum of deleted word embeddings as it is stated by Guu et al. (2018). – RaE that is proposed by Hashimoto et al. (2018) as a method with an in-domain retriever. The editor of this model is a Seq2Seq LSTM equipped with attention mechanism over the input x, and copy mechanism over the retrieved pair p and q. – CopyEditor+Ret which is composed of the editor of Hashimoto et al. (2018), and the General Retriever. We compare FSET with this baseline model to further evaluate the role of our proposed editor. 4.3 Experimental settings Table 1 shows the settings of our model. We select the hyperparameters suggested by Li et al. (2018) for the LSTM-based Seq2Seq baselines, and the hyperparameters mentioned by Li et al. (2019) for the Transformer-based baselines. It is worth noting that our model’s size w.r.t. the number of parameters is approximately 1 2 of the baseline LSTM’s size and 1 5 of the baseline Transformer’s size. The newly created retrieval-based baselines have the same hidden size and the same number of layers as the non-retrieval models. For the Seq2Seq+Ret model, we keep the ratio of hidden size to the edit vector dimension same as the reported ratio in Guu et al. (2018). We train all of the models for 100k iterations, and choose the best version based on their validation loss after training. We set the batch size to 128 and the vocabulary size to 8k in all of the experiments. The embeddings are also trained from scratch. In all of the experiments on the retrievalbased methods, the hyper-parameter K is set to 1. However, results for different values of K are also reported in A.2. During the decoding stage, we use beam search to generate a set of outputs. In order to select the final output, an approach similar to Gupta et al. (2018) is used which chooses the most lexically similar sentence to the input where the similarity is calculated based on the Jaccard measure. 4.4 Results and analysis We compare different methods using BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005) as the most common metrics for automatic evaluation of paraphrase generation methods. Table 2 summarizes the results of different methods. These results indicate that our model outperforms the previous state-ofthe-art models in terms of all of the metrics. It is worth noting that the models which have utilized copy mechanism, such as DNPG, RbM, RaE, and CopyEditor+Ret, generally outperform the other baselines. The Seq2Seq+Ret, i.e. 
the 6016 Quora Twitter URL Paraphrasing Models ROUGE-2 ROUGE-1 BLEU-4 BLEU-2 METEOR ROUGE-2 ROUGE-1 BLEU-4 BLEU-2 METEOR Residual LSTM (Prakash et al., 2016) 32.71 59.69 24.56 38.52 29.39 27.94 41.77 25.92 32.13 24.88 Seq2Seq+Ret (Ours) 32.71 60.83 25.23 42.71 32.51 21.56 40.18 20.11 31.58 22.38 DiPS (Kumar et al., 2019) 31.77 59.79 25.37 40.35 29.28 23.67 43.64 27.66 37.92 25.69 Transformer (Vaswani et al., 2017) 34.23 61.25 30.38 42.91 34.65 29.55 44.53 32.14 40.34 28.26 DNPG (Li et al., 2019) 2 37.75 63.73 25.03 RbM (Li et al., 2018) 2 38.11 64.39 43.54 32.84 24.23 41.87 44.67 19.97 RaE (Hashimoto et al., 2018) 35.07 62.71 29.22 46.21 29.92 31.53 47.55 34.16 44.33 30.09 CopyEditor+Ret (Ours) 35.59 62.93 29.78 46.55 35.56 27.35 45.54 28.06 40.30 26.93 FSET (Ours) 39.55 66.17 33.46 51.03 38.57 32.04 49.53 34.62 46.35 31.67 Table 2: Results of the different models on two paraphrasing datasets. retrieval-based Residual LSTM, shows an improvement over Residual LSTM on Quora dataset. However, this is not the case on the Twitter dataset and we hypothesize that it is due to uncommon texts in this corpus (i.e. informal text with hashtags and abbreviated words), on which the General Retriever has not been trained. Therefore, a pretrained retriever cannot help in this case. The CopyEditor+Ret model which incorporates a more powerful editor than Seq2Seq+Ret shows better results than both of the Residual LSTM and Seq2Seq+Ret. However, a phenomenon similar to what was stated for Seq2Seq+Ret is also observed for this model on the Twitter dataset. The RaE model with the same editor as CopyEditor but with a supervised (task-specific) retriever leads to near state-of-theart results. This indicates the role of the supervised task-specific retriever used in RaE, especially in the results on Twitter dataset. The superiority of our method over RaE in all of the metrics could be a sign of the effectiveness of our proposed editor module. Although our model uses the General Retriever, it still outperforms all other methods even on the Twitter dataset. It is worth mentioning that we can replace the General Retriever in our method with other retrievers like supervised task-specific ones to improve the results even more. Moreover, it is worth noting that our model that is only based on the Transformer architecture and the General Retriever (that is not required to be trained in each domain) needs much less training time than RaE. 4.5 Human evaluation As there is no appropriate automatic metric for evaluating the diversity and novelty of generated sentences, we use human evaluation to assess the performance of our model qualitatively. We 2Results are directly reported from Li et al. (2018, 2019) on the same dataset and settings. Grammar Coherency Models Score κ Score κ DiPS (Kumar et al., 2019) 3.97 0.253 2.55 0.476 RaE (Hashimoto et al., 2018) 4.70 0.286 3.90 0.483 FSET (Ours) 4.70 0.394 4.22 0.528 Table 3: Human evaluation on Quora dataset. Tie: 23.0% 0.373 RaE: 20.6% FSET (Ours): 57.0% FSET Tie RaE Tie: 13.3% 0.430 DiPS: 11.3% FSET (Ours): 75.3% FSET Tie DiPS Tie: 28% 0.331 DiPS: 15.3% RaE: 56.6% RaE Tie DiPS FSET vs. RaE FSET vs. DiPS RaE vs. DiPS Figure 5: Results of the one-on-one human evaluation (second experiment). Annotators decide ”Tie” when the outputs of the two models have the same quality in their opinion. 
compare our method with two other methods: 1) RaE (Hashimoto et al., 2018) as a retrieval-based method adapted for paraphrasing, and 2) DiPS (Kumar et al., 2019) as a paraphrasing model which generates semantically diverse outputs by adopting a novel approach instead of beam search during the decoding stage. We choose these models as we would like to compare our method both with a state-of-the-art retrieval-based method and with a method that can generate diverse outputs. It must be noted that many of the recent methods in Table 2 are not able to generate diverse outputs. We first select 100 sentences randomly from the test set of Quora dataset. Then, for each model, three paraphrases are generated for each one of the sentences, and these three outputs are considered as a paraphrase group. We aggregate and shuffle these paraphrase groups and ask six human annotators to 6017 evaluate them in two scenarios. In the first scenario, we ask the human annotators to score the outputs individually based on the following two criteria: 1) Grammar and fluency, 2) Consistency and coherency. Similar to Li et al. (2018), we use a 5-scale rating for each criterion. Table 3 presents the results. As can be seen, our model generally outperforms the other methods. Although RaE and our model can both produce grammatically correct outputs, the consistency and coherency for the outputs of our method is much better. Moreover, the inter-annotator agreement measured by Cohen’s kappa κ shows fair or intermediate agreement between raters assessing the models. Since directly scoring diversity and novelty of one paraphrase group is not simple even for humans, in the second scenario, we ask the annotators to make one-on-one comparisons on the groups of generated paraphrases. In other words, for each pair of the models, they have to decide which model produces better outputs for each one of the sentences (Ties are also allowed). Figure 5 depicts the one-on-one diversity evaluation. Our method and RaE both outperform DiPS, probably due to their retrieval-based nature. Moreover, this figure reveals that our method can generate significantly better outputs compared to RaE. We believe the reason is that RaE’s editor is not as properly designed as our editor module. We explicitly inject the paraphrasing patterns found in the neighboring paraphrases into the Edit Performer which helps it to generate more diverse paraphrases. Please refer to the A.3 for some further details on the experiments. 4.6 Case study Table 4 shows some examples of the paraphrases generated by our model. A common pattern among the output paraphrases is that the model has combined different parts of the input sentence and the retrieved target sentence to create a grammatically correct paraphrase. Furthermore, the model is able to change the syntactic form of words correctly, or replace them with their synonyms, as it is the case for ”via” and ”by using”, and ”is” and ”are”. The fact that the model can apply these changes in different levels of granularity can indicate that the idea of editing based on MEVs works properly. (1) x Why do people ask such questions here on Quora which could be easily found on the internet ? p Why do people ask questions on Quora that could simply be googled? q why do people ask questions on quora that are easily answerable via a quick internet search ? y′ why do people ask questions on quora that could be easily answered by using a quick internet search ? (2) x Who can the Democrats effectively run against Trump in 2020 ? 
p Who will lead Trump ’s movement after the election ? q Who do you think is most likely to lead the Trump movement after 2016 ? y′ who do you think is the most likely democrat to run against trump in 2020 ? (3) x What should I do when someone marks my question as needing improvement? p Even though most of my asked questions are in well written form, they are usually marked as needing improvement. What should I do ? q Most of my questions are marked as needing improvement . Where can I seek professional help ? y′ what should I do if my question is marked as needing improvement ? (4) x What is the best way to make money as a 15 year old ? p What are the best ways to make money as a teenager ? q I ’m a teen , how can I make some money online ? y′ I ’m a 15 year old , how can I make money ? Table 4: Example outputs of the model. Red phrases are extracted from the retrieved target sentence q, and blue parts are extracted from the source sentence x. Phrases that appear with slight changes in semantics or syntax in the output are made bold. The sentences are annotated manually for better readability. Model Variant ROUGE-2 ROUGE-1 BLEU-4 BLEU-2 Jaccard Retriever 38.52 65.47 31.72 48.83 No edit vector z 38.31 65.44 30.40 47.77 No Attention on MEVs M 39.36 65.72 29.73 46.66 Table 5: Ablation study 4.7 Model Ablation In order to further evaluate the role of each module in our model, we train and assess different variants of it where in each variant, a key component has been replaced by an alternative simpler one: • Jaccard Retriever: The retriever of our model is replaced by a simple retriever that selects neighbor sentences using the Jaccard similarity metric. • No edit vector z: A variant in which we do not condition the Transformer in the Edit Performer on the aggregated edit vector z, and edit the source sentence merely based on MEVs. • No Attention on MEVs: In this variant of our model, the Transformer in the Edit Performer is not conditioned on MEVs, and the source sentence is edited based on only z. 6018 We train all of these variants on the Quora paraphrasing dataset. Table 5 shows the results of these models. As it is seen, the model which uses the Jaccard similarity measure performs worse than the original model with the General Retriever. Nonetheless, the results of this version explains that even the combination of our editor module with this simple retriever outperforms previous state-of-theart methods. This indicates that our proposed editor can distinguish whether the extracted edits are plausible enough to be applied to the input sentence. Moreover, the results show that both eliminating z and M from our editor decrease its performance. In other words, both conditioning on z as the aggregated edit at each step of generation and the attention on MEVs M help the proposed editor. 5 Conclusion In this paper, we proposed a retrieval-based paraphrase generation model which includes a novel fully-attentional editor. This editor learns how to extract edits from a paraphrase pair and also when and how to apply these edits to a new input sentence. We also introduced the new idea of Micro Edit Vectors, where each one of these vectors represents a small edit that should be applied to the source sentence to get its paraphrase. We incorporated Transformer modules in our editor and augmented them with attention over Micro Edit Vectors. The proposed model outperforms the previous state-of-the-art paraphrase generation models in terms of both automatic metrics and human evaluation. 
Moreover, the outputs show that our model is able to produce paraphrases by editing sentences in a fine-grained manner using the idea of MEVs. In future work, we intend to adapt our editor module for other learning tasks with both the structured input and structured output. 6 Acknowledgments We thank anonymous reviewers for their detailed feedback and suggestions. References Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 597– 604, Ann Arbor, Michigan. Association for Computational Linguistics. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415– 1425, Baltimore, Maryland. Association for Computational Linguistics. Rahul Bhagat and Eduard Hovy. 2013. Squibs: What is a paraphrase? Computational Linguistics, 39(3):463–472. Ziqiang Cao, Chuwei Luo, Wenjie Li, and Sujian Li. 2017. Joint copying and restricted generation for paraphrase. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI17, pages 3152–3158. AAAI Press. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. CoRR, abs/1803.11175. Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019. Controllable paraphrase generation with a syntactic exemplar. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5972–5984, Florence, Italy. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, pages 1156–1165, New York, NY, USA. ACM. Yao Fu, Yansong Feng, and John P Cunningham. 2019. Paraphrase generation with latent bag of words. In Advances in Neural Information Processing Systems 32, pages 13645–13656. Curran Associates, Inc. Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O. K. Li. 2017. Search engine guided nonparametric neural machine translation. CoRR, abs/1705.07267. 6019 Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2018. A deep generative framework for paraphrase generation. In Proceedings of the ThirtySecond AAAI Conference on Artificial Intelligence, AAAI18, pages 5149–5156. AAAI Press. Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. 
Transactions of the Association for Computational Linguistics, 6:437–450. Tatsunori B. Hashimoto, Kelvin Guu, Yonatan Oren, and Percy Liang. 2018. A retrieve-and-edit framework for predicting structured outputs. In Proceedings of the 32Nd International Conference on Neural Information Processing Systems, NIPS’18, pages 10073–10083, USA. Curran Associates Inc. Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1875–1885, New Orleans, Louisiana. Association for Computational Linguistics. Jeff Johnson, Matthijs Douze, and Herve Jegou. 2019. Billion-scale similarity search with gpus. IEEE Transactions on Big Data, page 11. Tomoyuki Kajiwara. 2019. Negative lexically constrained decoding for paraphrase generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6047– 6052, Florence, Italy. Association for Computational Linguistics. Ashutosh Kumar, Satwik Bhattamishra, Manik Bhandari, and Partha Talukdar. 2019. Submodular optimization-based diverse paraphrasing and its effectiveness in data augmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3609–3619, Minneapolis, Minnesota. Association for Computational Linguistics. Zichao Li, Xin Jiang, Lifeng Shang, and Hang Li. 2018. Paraphrase generation with deep reinforcement learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3865–3878, Brussels, Belgium. Association for Computational Linguistics. Zichao Li, Xin Jiang, Lifeng Shang, and Qun Liu. 2019. Decomposable neural paraphrase generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3403–3414, Florence, Italy. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 881–893, Valencia, Spain. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual LSTM networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2923–2934, Osaka, Japan. The COLING 2016 Organizing Committee. Yiping Song, Rui Yan, Xiang Li, Dongyan Zhao, and Ming Zhang. 2016. Two are better than one: An ensemble of retrieval- and generation-based dialog systems. CoRR, abs/1610.07149. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Su Wang, Rahul Gupta, Nancy Chang, and Jason Baldridge. 2019. A task in a suit and a tie: Paraphrase generation with semantic augmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):7176–7183. Yu Wu, Furu Wei, Shaohan Huang, Yunli Wang, Zhoujun Li, and Ming Zhou. 2019. Response generation by context-aware prototype editing. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):7281–7288. 6020 A Appendix A.1 Construction of N(x) during training For each pair of sentences, such as (x, y), we augment its neighbourhood set N(x) with multiple (x, y) pairs to get the new neighbourhood set N ′(x) = [(p1, q1), ..., (pK, qK), (x, y), ..., (x, y)], where first K pairs are the K-most similar pairs (excluding (x, y) itself), and (x, y) is repeated K′ < K times (K′ is another hyperparameter of the model). Since the model sees the (x, y) pair K′ K+K′ times during training as the retrieved pair, and these particular pairs include the output y themselves, the model is encouraged to use information coming from the retrieved neighboring pairs more often. A.2 Analysis of Varying K We conduct an experiment to evaluate the effect of the hyper-parameter K in the proposed method. For each value of K ∈{1, 3, 5}, we train our model once and obtain its results on the Quora dataset. Then, the value of two quality metrics (i.e. BLUE-2 and ROUGE-2) and two diversity metrics (i.e. SelfBLEU-2 And PINC-4) are computed. Figure 6 summarizes the obtained results. According to this figure, increasing the value of K slightly decreases the quality metrics while highly increases the diversity measures (Note that lower values of SELf-BLEU and higher values of PINC indicate more diversity in the outputs). It shows that incorporating wider neighborhood in the editing process results in more diversity in the paraphrasing made by the editor. A.3 Human Evaluation The form used for the one-by-one experiment contains the following material: Which set of the outputs do you prefer? Please opt based on the following criteria: • Novelty: If any one of the outputs has expressed the semantic content of source sentence in a novel way like what we do as humans (e.g. changing the voice from active to passive or vice versa, using different words, phrases, or sentences that are not present in the source sentence but without changing the meaning considerably). Figure 6: Results of the proposed method with some different values of the hyperparameter K according to four different metrics • Diversity: if the three outputs are not expressed in the same way using the same words. • Quality: If the outputs are paraphrases of the input sentence to a good extent. • Readability: If the outputs are understandable by humans. Note: Please select the option ’both(#1 and #2)’ if you can not decide which one is better. A.4 Example outputs Table 6 shows the paraphrases generated for more sample inputs from the Quora dataset. 6021 Figure 7: The human evaluation form for the first experiment. Figure 8: The human evaluation form for the second experiment. (1) x What are great examples of absent mindedness ? p What are the common characteristics of ignorant people ? 
q What are the most common characteristics of stupid , ignorant or uneducated people ? y′ what are the most common examples of absent mindedness ? (2) x What should I know about moving from Pennsylvania to South Carolina ? p What are the pros and cons of living in Richmond , VA ? q What is your review of Richmond , VA ? y′ what is your review of moving from pennsylvania to south carolina ? (3) x How does Quora determine how many views an answer gets ? p How does Quora calculate how many views does an answer have ? q How does Quora count the views of my / your answers ? y′ how does quora determine the views of your answers ? (4) x Do you think India should revisit Indus Water Treaty with Pakistan ? p Should India hold a referendum in Kashmir ? q Would India ever consider a referendum for Kashmir independence ? y′ would india ever consider a indus water treaty with pakistan ? (5) x Which is your favourite time of the day ? p What is your favorite month of the year ? q What is your favorite month of the year and why ? y′ what is your favourite time of the day and why ? (6) x How do I homeschool my kids ? p How do I teach my child ? q What are the best ways to teach your child ? y′ what are the best ways to homeschool my child ? (7) x How did Andy Warhol paint Marilyn Monroe ? p Who was Mona Lisa and why was she painted ? q What is story behind mona lisa painting ? y′ what is the story behind andy warhol paint marilyn monroe ? (8) x Can we start a Sainik School in the private sector ? p Is n’t it time for Indian goverment to change its reservation system for jobs and education from caste based to economic based system ? q Is n’t it time India change its reservation policies from implemented on caste basis to based on financial backgrounds ? y′ is it possible to start a sainik school in the private sector ? (9) x Can we start a Sainik School in the private sector ? p Should the Indian education system be revamped ? q Does the Indian education system need to change and why ? y′ does the indian government need to start a sainik school in the private sector ? (10) x What is the best thing any one has ever done for you ? p What is the nicest thing anyone has ever done for you ? q What ’s the nicest thing anyone has ever done for you ? y′ what ’s the best thing that has ever done for you ? (11) x What are some of the things people like or dislike in general ? p What are some things that you hate ? q What are some things that irritate you and that you hate ? y′ what are some things that most people dislike in general ? Table 6: More examples of our model’s outputs
2020
535
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6022–6034 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6022 Emerging Cross-lingual Structure in Pretrained Language Models Alexis Conneau♥∗Shijie Wu♠∗ Haoran Li♥ Luke Zettlemoyer♥ Veselin Stoyanov♥ ♠Department of Computer Science, Johns Hopkins University ♥Facebook AI [email protected], [email protected] {aimeeli,lsz,ves}@fb.com Abstract We study the problem of multilingual masked language modeling, i.e. the training of a single model on concatenated text from multiple languages, and present a detailed study of several factors that influence why these models are so effective for cross-lingual transfer. We show, contrary to what was previously hypothesized, that transfer is possible even when there is no shared vocabulary across the monolingual corpora and also when the text comes from very different domains. The only requirement is that there are some shared parameters in the top layers of the multi-lingual encoder. To better understand this result, we also show that representations from monolingual BERT models in different languages can be aligned post-hoc quite effectively, strongly suggesting that, much like for non-contextual word embeddings, there are universal latent symmetries in the learned embedding spaces. For multilingual masked language modeling, these symmetries are automatically discovered and aligned during the joint training process. 1 Introduction Multilingual language models such as mBERT (Devlin et al., 2019) and XLM (Lample and Conneau, 2019) enable effective cross-lingual transfer — it is possible to learn a model from supervised data in one language and apply it to another with no additional training. Recent work has shown that transfer is effective for a wide range of tasks (Wu and Dredze, 2019; Pires et al., 2019). These work speculates why multilingual pretraining works (e.g. shared vocabulary), but only experiment with a single reference mBERT and is unable to systematically measure these effects. In this paper, we present the first detailed empirical study of the effects of different masked lan∗Equal contribution. Work done while Shijie was interning at Facebook AI. guage modeling (MLM) pretraining regimes on cross-lingual transfer. Our first set of experiments is a detailed ablation study on a range of zero-shot cross-lingual transfer tasks. Much to our surprise, we discover that language universal representations emerge in pretrained models without the requirement of any shared vocabulary or domain similarity, and even when only a subset of the parameters in the joint encoder are shared. In particular, by systematically varying the amount of shared vocabulary between two languages during pretraining, we show that the amount of overlap only accounts for a few points of performance in transfer tasks, much less than might be expected. By sharing parameters alone, pretraining learns to map similar words and sentences to similar hidden representations. To better understand these effects, we also analyze multiple monolingual BERT models trained independently. We find that monolingual models trained in different languages learn representations that align with each other surprisingly well, even though they have no shared parameters. This result closely mirrors the widely observed fact that word embeddings can be effectively aligned across languages (Mikolov et al., 2013). 
Similar dynamics are at play in MLM pretraining, and at least in part explain why they aligned so well with relatively little parameter tying in our earlier experiments. This type of emergent language universality has interesting theoretical and practical implications. We gain insight into why the models transfer so well and open up new lines of inquiry into what properties emerge in common in these representations. They also suggest it should be possible to adapt pretrained models to new languages with little additional training and it may be possible to better align independently trained representations without having to jointly train on all of the (very large) unlabeled data that could be gathered. For example, concurrent work has shown that a pre6023 trained MLM model can be rapidly fine-tuned to another language (Artetxe et al., 2019). This paper offers the following contributions: • We provide a detailed ablation study on crosslingual representation of bilingual BERT. We show parameter sharing plays the most important role in learning cross-lingual representation, while shared BPE, shared softmax and domain similarity play a minor role. • We demonstrate even without any shared subwords (anchor points) across languages, crosslingual representation can still be learned. With bilingual dictionary, we propose a simple technique to create more anchor points by creating synthetic code-switched corpus, benefiting especially distantly-related languages. • We show monolingual BERTs of different language are similar with each other. Similar to word embeddings (Mikolov et al., 2013), we show monolingual BERT can be easily aligned with linear mapping to produce crosslingual representation space at each level. 2 Background Language Model Pretraining Our work follows in the recent line of language model pretraining. ELMo (Peters et al., 2018) first popularized representation learning from a language model. The representations are used in a transfer learning setup to improve performance on a variety of downstream NLP tasks. Follow-up work by Howard and Ruder (2018); Radford et al. (2018) further improves on this idea by fine-tuning the entire language model. BERT (Devlin et al., 2019) significantly outperforms these methods by introducing a masked-language model and next-sentence prediction objectives combined with a bi-directional transformer model. The multilingual version of BERT (dubbed mBERT) trained on Wikipedia data of over 100 languages obtains strong performance on zeroshot cross-lingual transfer without using any parallel data during training (Wu and Dredze, 2019; Pires et al., 2019). This shows that multilingual representations can emerge from a shared Transformer with a shared subword vocabulary. Crosslingual language model (XLM) pretraining (Lample and Conneau, 2019) was introduced concurrently to mBERT. On top of multilingual masked language models, they investigate an objective based on parallel sentences as an explicit crosslingual signal. XLM shows that cross-lingual language model pretraining leads to a new state of the art on XNLI (Conneau et al., 2018), supervised and unsupervised machine translation (Lample et al., 2018). Other work has shown that mBERT outperforms word embeddings on token-level NLP tasks (Wu and Dredze, 2019), and that adding character-level information (Mulcaire et al., 2019) and using multi-task learning (Huang et al., 2019) can improve cross-lingual performance. 
Alignment of Word Embeddings Researchers working on word embeddings noticed early that embedding spaces tend to be shaped similarly across different languages (Mikolov et al., 2013). This inspired work in aligning monolingual embeddings. The alignment was done by using a bilingual dictionary to project words that have the same meaning close to each other (Mikolov et al., 2013). This projection aligns the words outside of the dictionary as well due to the similar shapes of the word embedding spaces. Follow-up efforts only required a very small seed dictionary (e.g., only numbers (Artetxe et al., 2017)) or even no dictionary at all (Conneau et al., 2017; Zhang et al., 2017). Other work has pointed out that word embeddings may not be as isomorphic as thought (Søgaard et al., 2018) especially for distantly related language pairs (Patra et al., 2019). Ormazabal et al. (2019) show joint training can lead to more isomorphic word embeddings space. Schuster et al. (2019) showed that ELMo embeddings can be aligned by a linear projection as well. They demonstrate a strong zero-shot crosslingual transfer performance on dependency parsing. Wang et al. (2019) align mBERT representations and evaluate on dependency parsing as well. Neural Network Activation Similarity We hypothesize that similar to word embedding spaces, language-universal structures emerge in pretrained language models. While computing word embedding similarity is relatively straightforward, the same cannot be said for the deep contextualized BERT models that we study. Recent work introduces ways to measure the similarity of neural network activation between different layers and different models (Laakso and Cottrell, 2000; Li et al., 2016; Raghu et al., 2017; Morcos et al., 2018; Wang et al., 2018). For example, Raghu et al. 6024 (2017) use canonical correlation analysis (CCA) and a new method, singular vector canonical correlation analysis (SVCCA), to show that early layers converge faster than upper layers in convolutional neural networks. Kudugunta et al. (2019) use SVCCA to investigate the multilingual representations obtained by the encoder of a massively multilingual neural machine translation system (Aharoni et al., 2019). Kornblith et al. (2019) argues that CCA fails to measure meaningful similarities between representations that have a higher dimension than the number of data points and introduce the centered kernel alignment (CKA) to solve this problem. They successfully use CKA to identify correspondences between activations in networks trained from different initializations. 3 Cross-lingual Pretraining We study a standard multilingual masked language modeling formulation and evaluate performance on several different cross-lingual transfer tasks, as described in this section. 3.1 Multilingual Masked Language Modeling Our multilingual masked language models follow the setup used by both mBERT and XLM. We use the implementation of Lample and Conneau (2019). Specifically, we consider continuous streams of 256 tokens and mask 15% of the input tokens which we replace 80% of the time by a mask token, 10% of the time with the original word, and 10% of the time with a random word. Note the random words could be foreign words. The model is trained to recover the masked tokens from its context (Taylor, 1953). The subword vocabulary and model parameters are shared across languages. Note the model has a softmax prediction layer shared across languages. 
We use Wikipedia for training data, preprocessed by Moses (Koehn et al., 2007) and Stanford word segmenter (for Chinese only) and BPE (Sennrich et al., 2016) to learn subword vocabulary. During training, we sample a batch of continuous streams of text from one language proportionally to the fraction of sentences in each training corpus, exponentiated to the power 0.7. Pretraining details Each model is a Transformer (Vaswani et al., 2017) with 8 layers, 12 heads and GELU activiation functions (Hendrycks and Gimpel, 2016). The output softmax layer is tied with input embeddings (Press and Wolf, 2017). The embeddings dimension is 768, the hidden dimension of the feed-forward layer is 3072, and dropout is 0.1. We train our models with the Adam optimizer (Kingma and Ba, 2014) and the inverse square root learning rate scheduler of Vaswani et al. (2017) with 10−4 learning rate and 30k linear warmup steps. For each model, we train it with 8 NVIDIA V100 GPUs with 32GB of memory and mixed precision. It takes around 3 days to train one model. We use batch size 96 for each GPU and each epoch contains 200k batches. We stop training at epoch 200 and select the best model based on English dev perplexity for evaluation. 3.2 Cross-lingual Evaluation We consider three NLP tasks to evaluate performance: natural language inference (NLI), named entity recognition (NER) and dependency parsing (Parsing). We adopt the zero-shot cross-lingual transfer setting, where we (1) fine-tune the pretrained model on English and (2) directly transfer the model to target languages. We select the model and tune hyperparameters with the English dev set. We report the result on average of best two set of hyperparameters. Fine-tuning details We fine-tune the model for 10 epochs for NER and Parsing and 200 epochs for NLI. We search the following hyperparameter for NER and Parsing: batch size {16, 32}; learning rate {2e-5, 3e-5, 5e-5}. For XNLI, we search: batch size {4, 8}; encoder learning rate {1.25e-6, 2.5e-6, 5e-6}; classifier learning rate {5e-6, 2.5e-5, 1.25e-4}. We use Adam with fixed learning rate for XNLI and warmup the learning rate for the first 10% batch then decrease linearly to 0 for NER and Parsing. We save checkpoint after each epoch. NLI We use the cross-lingual natural language inference (XNLI) dataset (Conneau et al., 2018). The task-specific layer is a linear mapping to a softmax classifier, which takes the representation of the first token as input. NER We use WikiAnn (Pan et al., 2017), a silver NER dataset built automatically from Wikipedia, for English-Russian and English-French. For English-Chinese, we use CoNLL 2003 English (Tjong Kim Sang and De Meulder, 2003) and a Chinese NER dataset (Levow, 2006), with realigned Chinese NER labels based on the Stanford word segmenter. We model NER as BIO tagging. The task-specific layer is a linear mapping to a softmax 6025 Figure 1: On the impact of anchor points and parameter sharing on the emergence of multilingual representations. We train bilingual masked language models and remove parameter sharing for the embedding layers and first few Transformers layers to probe the impact of anchor points and shared structure on cross-lingual transfer. Figure 2: Probing the layer similarity of monolingual BERT models. We investigate the similarity of separate monolingual BERT models at different levels. We use an orthogonal mapping between the pooled representations of each model. We also quantify the similarity using the centered kernel alignment (CKA) similarity index. 
classifier, which takes the representation of the first subword of each word as input. We report spanlevel F1. We adopt a simple post-processing heuristic to obtain a valid span, rewriting standalone I-X into B-X and B-X I-Y I-Z into B-Z I-Z I-Z, following the final entity type. We report the span-level F1. Parsing Finally, we use the Universal Dependencies (UD v2.3) (Nivre, 2018) for dependency parsing. We consider the following four treebanks: English-EWT, French-GSD, Russian-GSD, and Chinese-GSD. The task-specific layer is a graphbased parser (Dozat and Manning, 2016), using representations of the first subword of each word as inputs. We measure performance with the labeled attachment score (LAS). 4 Dissecting mBERT/XLM models We hypothesize that the following factors play important roles in what makes multilingual BERT multilingual: domain similarity, shared vocabulary (or anchor points), shared parameters, and language similarity. Without loss of generality, we focus on bilingual MLM. We consider three pairs of languages: English-French, English-Russian, and English-Chinese. 4.1 Domain Similarity Multilingual BERT and XLM are trained on the Wikipedia comparable corpora. Domain similarity has been shown to affect the quality of crosslingual word embeddings (Conneau et al., 2017), but this effect is not well established for masked language models. We consider domain difference by training on Wikipedia for English and a random subset of Common Crawl of the same size for the other languages (Wiki-CC). We also consider a model trained with Wikipedia only (Default) for comparison. The first group in Tab. 1 shows domain mismatch has a relatively modest effect on performance. XNLI and parsing performance drop around 2 points while NER drops over 6 points for all languages on average. One possible reason is that the labeled WikiAnn data for NER consists of Wikipedia text; domain differences between source and target language during pretraining hurt performance more. Indeed for English and Chinese NER, where neither side comes from Wikipedia, performance only drops around 2 points. 4.2 Anchor points Anchor points are identical strings that appear in both languages in the training corpus. Translingual words like DNA or Paris appear in the Wikipedia of many languages with the same meaning. In mBERT, anchor points are naturally preserved due to joint BPE and shared vocabulary across languages. Anchor point existence has been suggested as a key ingredient for effective cross-lingual transfer since they allow the shared encoder to have at least some direct tying of meaning across different languages (Lample and Conneau, 2019; Pires et al., 2019; Wu and Dredze, 2019). However, this effect 6026 40 60 ACC Default Wiki-CC No anchors Default anchors Extra anchors Sep Emb Sep L1-3 Sep L1-6 Sep Emb + L1-3 Sep Emb + L1-6 En-Fr XNLI 0 20 40 60 F1 En-Zh NER 0 20 40 LAS En-Ru Parsing Figure 3: Cross-lingual transfer of bilingual MLM on three tasks and language pairs under different settings. Others tasks and languages pairs follows similar trend. See Tab. 1 for full results. Model Domain BPE Merges Anchors Pts Share Param. 
Softmax XNLI (Acc) NER (F1) Parsing (LAS) fr ru zh ∆ fr ru zh ∆ fr ru zh ∆ Default Wiki-Wiki 80k all all shared 73.6 68.7 68.3 0.0 79.8 60.9 63.6 0.0 73.2 56.6 28.8 0.0 Domain Similarity (§4.1) Wiki-CC Wiki-CC 74.2 65.8 66.5 -1.4 74.0 49.6 61.9 -6.2 71.3 54.8 25.2 -2.5 Anchor Points (§4.2) No anchors 40k/40k 0 72.1 67.5 67.7 -1.1 74.0 57.9 65.0 -2.4 72.3 56.2 27.4 -0.9 Default anchors 40k/40k 74.0 68.1 68.9 +0.1 76.8 56.3 61.2 -3.3 73.0 57.0 28.3 -0.1 Extra anchors extra 74.0 69.8 72.1 +1.8 76.1 59.7 66.8 -0.5 73.3 56.9 29.2 +0.3 Parameter Sharing (§4.3) Sep Emb 40k/40k 0* Sep Emb lang-specific 72.7 63.6 60.8 -4.5 75.5 57.5 59.0 -4.1 71.7 54.0 27.5 -1.8 Sep L1-3 40k/40k Sep L1-3 72.4 65.0 63.1 -3.4 74.0 53.3 60.8 -5.3 69.7 54.1 26.4 -2.8 Sep L1-6 40k/40k Sep L1-6 61.9 43.6 37.4 -22.6 61.2 23.7 3.1 -38.7 61.7 31.6 12.0 -17.8 Sep Emb + L1-3 40k/40k 0* Sep Emb + L1-3 lang-specific 69.2 61.7 56.4 -7.8 73.8 46.8 53.5 -10.0 68.2 53.6 23.9 -4.3 Sep Emb + L1-6 40k/40k 0* Sep Emb + L1-6 lang-specific 51.6 35.8 34.4 -29.6 56.5 5.4 1.0 -47.1 50.9 6.4 1.5 -33.3 Table 1: Dissecting bilingual MLM based on zero-shot cross-lingual transfer performance. - denote the same as the first row (Default). ∆denote the difference of average task performance between a model and Default. has not been carefully measured. We present a controlled study of the impact of anchor points on cross-lingual transfer performance by varying the amount of shared subword vocabulary across languages. Instead of using a single joint BPE with 80k merges, we use languagespecific BPE with 40k merges for each language. We then build vocabulary by taking the union of the vocabulary of two languages and train a bilingual MLM (Default anchors). To remove anchor points, we add a language prefix to each word in the vocabulary before taking the union. Bilingual MLM (No anchors) trained with such data has no shared vocabulary across languages. However, it still has a single softmax prediction layer shared across languages and tied with input embeddings. As Wu and Dredze (2019) suggest there may also be correlation between cross-lingual performance and anchor points, we additionally increase anchor points by using a bilingual dictionary to create code switch data for training bilingual MLM (Extra anchors). For two languages, ℓ1 and ℓ2, with bilingual dictionary entries dℓ1,ℓ2, we add anchors to the training data as follows. For each training word wℓ1 in the bilingual dictionary, we either leave it as is (70% of the time) or randomly replace it with one of the possible translations from the dictionary (30% of the time). We change at most 15% of the words in a batch and sample word translations from PanLex (Kamholz et al., 2014) bilingual dictionaries, weighted according to their translation quality 1. The second group of Tab. 1 shows cross-lingual transfer performance under the three anchor point conditions. Anchor points have a clear effect on performance and more anchor points help, especially in the less closely related language pairs (e.g. English-Chinese has a larger effect than EnglishFrench with over 3 points improvement on NER and XNLI). However, surprisingly, effective transfer is still possible with no anchor points. Com1Although we only consider pairs of languages, this procedure naturally scales to multiple languages, which could produce larger gains in future work. 6027 paring no anchors and default anchors, the performance of XNLI and parsing drops only around 1 point while NER even improve 1 points averaging over three languages. 
Overall, these results show that we have previously overestimated the contribution of anchor points during multilingual pretraining. Concurrently, Karthikeyan et al. (2020) similarly find anchor points play minor role in learning cross-lingual representation. 4.3 Parameter sharing Given that anchor points are not required for transfer, a natural next question is the extent to which we need to tie the parameters of the transformer layers. Sharing the parameters of the top layer is necessary to provide shared inputs to the task-specific layer. However, as seen in Figure 1, we can progressively separate the bottom layers 1:3 and 1:6 of the Transformers and/or the embedding layers (including positional embeddings) (Sep Emb; Sep L1-3; Sep L1-6; Sep Emb + L1-3; Sep Emb + L1-6). Since the prediction layer is tied with the embeddings layer, separating the embeddings layer also introduces a language-specific softmax prediction layer for the cloze task. Additionally, we only sample random words within one language during the MLM pretraining. During fine-tuning on the English training set, we freeze the languagespecific layers and only fine-tune the shared layers. The third group in Tab. 1 shows cross-lingual transfer performance under different parameter sharing conditions with “Sep” denote which layers is not shared across languages. Sep Emb (effectively no anchor point) drops more than No anchors with 3 points on XNLI and around 1 point on NER and parsing, suggesting have a cross-language softmax layer also helps to learn cross-lingual representations. Performance degrades as fewer layers are shared for all pairs, and again the less closely related language pairs lose the most. Most notably, the cross-lingual transfer performance drops to random when separating embeddings and bottom 6 layers of the transformer. However, reasonably strong levels of transfer are still possible without tying the bottom three layers. These trends suggest that parameter sharing is the key ingredient that enables the learning of an effective cross-lingual representation space, and having language-specific capacity does not help learn a language-specific encoder for cross-lingual representation. Our hypothesis is that the representations that the models learn for different languages are similarly shaped and models can reduce their capacity budget by aligning representations for text that has similar meaning across languages. 4.4 Language Similarity Finally, in contrast to many of the experiments above, language similarity seems to be quite important for effective transfer. Looking at Tab. 1 column by column in each task, we observe performance drops as language pairs become more distantly related. Using extra anchor points helps to close the gap. However, the more complex tasks seem to have larger performance gaps and having language-specific capacity does not seem to be the solution. Future work could consider scaling the model with more data and cross-lingual signal to close the performance gap. 4.5 Conclusion Summarised by Figure 3, parameter sharing is the most important factor. More anchor points help but anchor points and shared softmax projection parameters are not necessary for effective crosslingual transfer. Joint BPE and domain similarity contribute a little in learning cross-lingual representation. 
5 Similarity of BERT Models

To better understand the robust transfer effects of the last section, we show that independently trained monolingual BERT models learn representations that are similar across languages, much like the widely observed similarities in word embedding spaces. In this section, we show that independent monolingual BERT models produce highly similar representations when evaluated at the word level (§5.1.1), the contextual word level (§5.1.2), and the sentence level (§5.1.3). We also plot the cross-lingual similarity of neural network activations with centered kernel alignment (§5.2) at each layer. We consider five languages: English, French, German, Russian, and Chinese.

5.1 Aligning Monolingual BERTs

To measure similarity, we learn an orthogonal mapping using the Procrustes (Smith et al., 2017) approach:

W^{\star} = \operatorname*{argmin}_{W \in O_d(\mathbb{R})} \lVert WX - Y \rVert_F = UV^{\top}, \quad \text{with } U\Sigma V^{\top} = \mathrm{SVD}(YX^{\top}),

where X and Y are representations of two monolingual BERT models, sampled at different granularities as described below. We apply iterative normalization on X and Y before learning the mapping (Zhang et al., 2019).

5.1.1 Word-level alignment

In this section, we align both the non-contextual word representations from the embedding layers and the contextual word representations from the hidden states of the Transformer at each layer. For non-contextualized word embeddings, we define X and Y as the word embedding layers of monolingual BERT, which contain a single embedding per word (type). Note that in this case we only keep words that consist of a single subword. For contextualized word representations, we first encode 500k sentences in each language. At each layer, and for each word, we collect all contextualized representations of that word in the 500k sentences and average them to get a single embedding. Since BERT operates at the subword level, for one word we take the average of all its subword embeddings. Eventually, we get one word embedding per layer. We use the MUSE benchmark (Conneau et al., 2017), a bilingual dictionary induction dataset, for alignment supervision, and evaluate the alignment on word translation retrieval. As a baseline, we use the first 200k embeddings of fastText (Bojanowski et al., 2017) and learn the mapping using the same procedure as in §5.1. Note that we use a 200k-entry subset of the fastText vocabulary, the same as for BERT, to obtain comparable numbers. We retrieve word translations using CSLS (Conneau et al., 2017) with K=10.

In Figure 4, we report the alignment results under these two settings. Figure 4a shows that the subword embedding matrix of BERT, where each subword is a standalone word, can easily be aligned with an orthogonal mapping, obtaining slightly better performance than the same subset of fastText. Figure 4b shows that the embedding matrix built from the average of all contextual embeddings of each word can also be aligned to obtain a decent-quality bilingual dictionary, although it underperforms fastText. We notice that using contextual representations from higher layers obtains better results than using lower layers.

5.1.2 Contextual word-level alignment

In addition to aligning word representations, we also align the representations of two monolingual BERT models in a contextual setting, and evaluate performance on cross-lingual transfer for NER and parsing. We take the Transformer layers of each monolingual model up to layer i, and learn a mapping W from layer i of the target model to layer i of the source model.
To create that mapping, we use the same Procrustes approach but with a dictionary of parallel contextual words, obtained by running the fastAlign (Dyer et al., 2013) model on the 10k XNLI parallel sentences. For each downstream task, we learn task-specific layers on top of the i-th English layer: four Transformer layers and a task-specific layer. We learn these on the training set, but keep the first i pretrained layers frozen. After training these task-specific parameters, we encode (say) a Chinese sentence with the first i layers of the target Chinese BERT model, project the contextualized representations back to the English space using the W we learned, and then use the task-specific layers for NER and parsing.

In Figure 5, we vary i from the embedding layer (layer 0) to the last layer (layer 8) and present the results of our approach on parsing and NER. We also report results using the first i layers of a bilingual MLM (biMLM).2 We show that aligning monolingual models (MLM align) obtains relatively good performance, even though it performs worse than the bilingual MLM, except for parsing on English-French. The results of monolingual alignment generally show that we can align contextual representations of monolingual BERT models with a simple linear mapping and use this approach for cross-lingual transfer. We also observe that the model obtains the highest transfer performance with middle-layer representation alignment, not the last layers. The performance gap between monolingual MLM alignment and the bilingual MLM is higher for NER than for parsing, suggesting that the syntactic information needed for parsing might be easier to align with a simple mapping, while entity information requires more explicit entity alignment.

5.1.3 Sentence-level alignment

In this case, X and Y are obtained by average-pooling the subword representations (excluding special tokens) of sentences at each layer of monolingual BERT. We use multi-way parallel sentences from XNLI for alignment supervision and Tatoeba (Schwenk et al., 2019) for evaluation.

2 In Appendix A, we also present the same alignment step with biMLM, but only observed improvements in parsing.

Figure 4: Alignment of word-level representations from monolingual BERT models on a subset of the MUSE benchmark (P@1, BERT vs. fastText, for en-fr, en-de, en-ru, en-zh): (a) non-contextual word embedding alignment; (b) contextual word embedding alignment by layer. Figure 4a and Figure 4b are not comparable due to different embedding vocabularies.

Figure 5: Contextual representation alignment of different layers for zero-shot cross-lingual transfer (F1 for NER and LAS for parsing; pairs en-fr, en-ru, en-zh; MLM align vs. biMLM).

Figure 6 shows the sentence similarity search results with nearest neighbor search and cosine similarity, evaluated by precision at 1, for four language pairs. Here the best results are obtained at lower layers. The performance is surprisingly good given that we only use 10k parallel sentences to learn the alignment, without any fine-tuning. As a reference, the state-of-the-art performance is over 95%, obtained by LASER (Artetxe and Schwenk, 2019) trained with millions of parallel sentences.

Figure 6: Parallel sentence retrieval accuracy after Procrustes alignment of monolingual BERT models (accuracy by layer, BERT vs. LASER, for en-fr, en-de, en-ru, en-zh).
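The sentence-level evaluation described above boils down to mean-pooling subword vectors and nearest-neighbor retrieval under cosine similarity. The following is a minimal sketch of the precision-at-1 computation, assuming the (already aligned) sentence matrices are given; it is not the authors' evaluation script.

import numpy as np

def retrieval_precision_at_1(X_src: np.ndarray, Y_tgt: np.ndarray) -> float:
    """X_src, Y_tgt: (n, d) mean-pooled, aligned sentence vectors where row i
    of X_src and row i of Y_tgt are translations of each other."""
    X = X_src / np.linalg.norm(X_src, axis=1, keepdims=True)
    Y = Y_tgt / np.linalg.norm(Y_tgt, axis=1, keepdims=True)
    sims = X @ Y.T                        # cosine similarity matrix (n, n)
    nearest = sims.argmax(axis=1)         # nearest target sentence for each source
    return float((nearest == np.arange(len(X))).mean())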
5.1.4 Conclusion

These findings demonstrate that word-level, contextual word-level, and sentence-level BERT representations can all be aligned with a simple orthogonal mapping. Similar to the alignment of word embeddings (Mikolov et al., 2013), this shows that BERT models are similar across languages. This result gives more intuition on why mere parameter sharing is sufficient for multilingual representations to emerge in multilingual masked language models.

5.2 Neural network similarity

Based on the work of Kornblith et al. (2019), we examine centered kernel alignment (CKA), a neural network similarity index that improves upon canonical correlation analysis (CCA), and use it to measure the similarity across both monolingual and bilingual masked language models. Linear CKA is invariant to orthogonal transformations and isotropic scaling, but not to arbitrary invertible linear transformations. The linear CKA similarity measure is defined as follows:

CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F ||Y^T Y||_F),

where X and Y correspond respectively to the matrices of the d-dimensional mean-pooled (excluding special tokens) subword representations at layer l of the n parallel source and target sentences.

In Figure 7, we show the CKA similarity of monolingual models, compared with bilingual models and random encoders, on multi-way parallel sentences (Conneau et al., 2018) for five language pairs: English to English' (obtained by back-translation from French), French, German, Russian, and Chinese. The monolingual en' model is trained on the same data as en but with a different random seed, and the bilingual en-en' model is trained on English data but with a separate embedding matrix, as in §4.3. The rest of the bilingual MLM is trained with the Default setting. We only use the random encoder for non-English sentences.

Figure 7: CKA similarity of mean-pooled multi-way parallel sentence representations at each layer, comparing Bilingual, Monolingual, and Random encoders for en-en', en-fr, en-de, en-ru, and en-zh. Note that en' corresponds to paraphrases of en obtained from back-translation (en-fr-en'). The random encoder is only used for non-English sentences. L0 is the embedding layer, while L1 to L8 are the corresponding Transformer layers; the average row is the average of the nine (L0-L8) similarity measurements.

Figure 7 shows that bilingual models have slightly higher similarity than monolingual models, with random encoders serving as a lower bound. Despite the slightly lower similarity between monolingual models, it still explains the alignment performance in §5.1.
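For reference, the linear CKA above can be computed in a few lines. This sketch additionally mean-centers the features, which the "centered" in CKA presupposes; it is an illustration rather than the authors' exact implementation.

import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two sets of representations of the same n sentences.
    X: (n, d1), Y: (n, d2) mean-pooled sentence representations."""
    X = X - X.mean(axis=0, keepdims=True)   # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(Y.T @ X, ord='fro') ** 2
    den = np.linalg.norm(X.T @ X, ord='fro') * np.linalg.norm(Y.T @ Y, ord='fro')
    return float(num / den)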
Because the measurement is also invariant to orthogonal mapping, the CKA similarity is highly correlated with the sentence-level alignment performance in Figure 6 with over 0.9 Pearson correlation for all four languages pairs. For monolingual and bilingual models, the first few layers have the highest similarity, which explains why Wu and Dredze (2019) finds freezing bottom layers of mBERT helps cross-lingual transfer. The similarity gap between monolingual model and bilingual model decrease as the languages pair become more distant. In other words, when languages are similar, using the same model increase representation similarity. On the other hand, when languages are dissimilar, using the same model does not help representation similarity much. Future work could consider how to best train multilingual models covering distantly related languages. 6 Discussion In this paper, we show that multilingual representations can emerge from unsupervised multilingual masked language models with only parameter sharing of some Transformer layers. Even without any anchor points, the model can still learn to map representations coming from different languages in a single shared embedding space. We also show that isomorphic embedding spaces emerge from monolingual masked language models in different languages, similar to word2vec embedding spaces (Mikolov et al., 2013). By using a linear mapping, we are able to align the embedding layers and the contextual representations of Transformers trained in different languages. We also use the CKA neural network similarity index to probe the similarity between BERT Models and show that the early layers of the Transformers are more similar across languages than the last layers. All of these effects were stronger for more closely related languages, suggesting there is room for significant improvements on more distant language pairs. 6031 References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874–3884, Minneapolis, Minnesota. Association for Computational Linguistics. Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Vancouver, Canada. Association for Computational Linguistics. Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2019. On the cross-lingual transferability of monolingual representations. arXiv preprint arXiv:1910.11856. Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597–610. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734. Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia. Association for Computational Linguistics. Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, and Ming Zhou. 2019. Unicoder: A universal language encoder by pretraining with multiple cross-lingual tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2485–2494, Hong Kong, China. Association for Computational Linguistics. David Kamholz, Jonathan Pool, and Susan Colowick. 2014. PanLex: Building a resource for panlingual lexical translation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 3145– 3150, Reykjavik, Iceland. European Language Resources Association (ELRA). K Karthikeyan, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilingual bert: An empirical study. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Association for Computational Linguistics. Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. 2019. Similarity of neural network representations revisited. International Conference on Machine Learning. Sneha Kudugunta, Ankur Bapna, Isaac Caswell, and Orhan Firat. 2019. Investigating multilingual NMT representations at scale. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International 6032 Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1565–1575, Hong Kong, China. 
Association for Computational Linguistics. Aarre Laakso and Garrison Cottrell. 2000. Content and cluster analysis: assessing representational similarity in neural systems. Philosophical psychology, 13(1):47–76. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. arXiv preprint arXiv:1901.07291. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049, Brussels, Belgium. Association for Computational Linguistics. Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 108–117, Sydney, Australia. Association for Computational Linguistics. Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John E Hopcroft. 2016. Convergent learning: Do different neural networks learn the same representations? In Iclr. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Ari Morcos, Maithra Raghu, and Samy Bengio. 2018. Insights on representational similarity in neural networks with canonical correlation. In Advances in Neural Information Processing Systems, pages 5727–5736. Phoebe Mulcaire, Jungo Kasai, and Noah A. Smith. 2019. Polyglot contextual representations improve crosslingual transfer. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3912–3918, Minneapolis, Minnesota. Association for Computational Linguistics. Joakim et al. Nivre. 2018. Universal dependencies 2.3. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, and Eneko Agirre. 2019. Analyzing the limitations of cross-lingual word embedding mappings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4990–4995, Florence, Italy. Association for Computational Linguistics. Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Crosslingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946–1958, Vancouver, Canada. Association for Computational Linguistics. Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R. Gormley, and Graham Neubig. 2019. Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 184–193, Florence, Italy. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. 
How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996– 5001, Florence, Italy. Association for Computational Linguistics. Ofir Press and Lior Wolf. 2017. Using the output embedding to improve language models. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157–163, Valencia, Spain. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper. pdf. Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. 2017. Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Information Processing Systems, pages 6076– 6085. Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1599–1613, Minneapolis, Minnesota. Association for Computational Linguistics. 6033 Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzman. 2019. Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from wikipedia. arXiv preprint arXiv:1907.05791. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. International Conference on Learning Representations. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778– 788, Melbourne, Australia. Association for Computational Linguistics. Wilson L Taylor. 1953. cloze procedure: A new tool for measuring readability. Journalism Bulletin, 30(4):415–433. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Liwei Wang, Lunjia Hu, Jiayuan Gu, Zhiqiang Hu, Yue Wu, Kun He, and John Hopcroft. 2018. Towards understanding learning representations: To what extent do different neural networks learn the same representation. In Advances in Neural Information Processing Systems, pages 9584–9593. Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, and Ting Liu. 2019. Cross-lingual BERT transformation for zero-shot dependency parsing. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5721– 5727, Hong Kong, China. Association for Computational Linguistics. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844, Hong Kong, China. Association for Computational Linguistics. Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959–1970, Vancouver, Canada. Association for Computational Linguistics. Mozhi Zhang, Keyulu Xu, Ken-ichi Kawarabayashi, Stefanie Jegelka, and Jordan Boyd-Graber. 2019. Are girls neko or sh¯ojo? cross-lingual alignment of non-isomorphic embeddings with iterative normalization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3180–3189, Florence, Italy. Association for Computational Linguistics. 6034 A Contextual word-level alignment of bilingual MLM representation 0 2 4 6 8 Layer 30 40 50 60 70 80 F1 NER pair en-fr en-ru en-zh model MLM align biMLM biMLM align 0 2 4 6 8 Layer 20 30 40 50 60 70 LAS Parsing pair en-fr en-ru en-zh model MLM align biMLM biMLM align Figure 8: Contextual representation alignment of different layers for zero-shot cross-lingual transfer.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6035–6044 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6035 FastBERT: a Self-distilling BERT with Adaptive Inference Time Weijie Liu1,2, Peng Zhou2, Zhiruo Wang3, Zhe Zhao2, Haotang Deng2 and Qi Ju2,∗ 1Peking University, Beijing, China 2Tencent Research, Beijing, China 3Beijing Normal University, Beijing, China [email protected], {rickzhou, nlpzhezhao, haotangdeng, damonju}@tencent.com, [email protected] Abstract Pre-trained language models like BERT have proven to be highly performant. However, they are often computationally expensive in many practical scenarios, for such heavy models can hardly be readily implemented with limited resources. To improve their efficiency with an assured model performance, we propose a novel speed-tunable FastBERT with adaptive inference time. The speed at inference can be flexibly adjusted under varying demands, while redundant calculation of samples is avoided. Moreover, this model adopts a unique selfdistillation mechanism at fine-tuning, further enabling a greater computational efficacy with minimal loss in performance. Our model achieves promising results in twelve English and Chinese datasets. It is able to speed up by a wide range from 1 to 12 times than BERT if given different speedup thresholds to make a speed-performance tradeoff. 1 Introduction Last two years have witnessed significant improvements brought by language pre-training, such as BERT (Devlin et al., 2019), GPT (Radford et al., 2018), and XLNet (Yang et al., 2019). By pretraining on unlabeled corpus and fine-tuning on labeled ones, BERT-like models achieved huge gains on many Natural Language Processing tasks. Despite this gain in accuracy, these models have greater costs in computation and slower speed at inference, which severely impairs their practicalities. Actual settings, especially with limited time and resources in the industry, can hardly enable such models into operation. For example, in tasks like sentence matching and text classification, one often requires to process billions of requests per second. What’s more, the number of requests varies with time. In the case of an online shopping site, the ∗Corresponding author: Qi Ju ([email protected]) number of requests during the holidays is five to ten times more than that of the workdays. A large number of servers need to be deployed to enable BERT in industrial settings, and many spare servers need to be reserved to cope with the peak period of requests, demanding huge costs. To improve their usability, many attempts in model acceleration have been made, such as quantinization (Gong et al., 2014), weights pruning (Han et al., 2015), and knowledge distillation (KD) (Romero et al., 2014). As one of the most popular methods, KD requires additional smaller student models that depend entirely on the bigger teacher model and trade task accuracy for ease in computation (Hinton et al., 2015). Reducing model sizes to achieve acceptable speed-accuracy balances, however, can only solve the problem halfway, for the model is still set as fixated, rendering them unable to cope with drastic changes in request amount. By inspecting many NLP datasets (Wang et al., 2018), we discerned that the samples have different levels of difficulty. Heavy models may overcalculate the simple inputs, while lighter ones are prone to fail in complex samples. 
As recent studies (Kovaleva et al., 2019) have shown redundancy in pre-training models, it is useful to design a onesize-fits-all model that caters to samples with varying complexity and gains computational efficacy with the least loss of accuracy. Based on this appeal, we propose FastBERT, a pre-trained model with a sample-wise adaptive mechanism. It can adjust the number of executed layers dynamically to reduce computational steps. This model also has a unique self-distillation process that requires minimal changes to the structure, achieving faster yet as accurate outcomes within a single framework. Our model not only reaches a comparable speedup (by 2 to 11 times) to the BERT model, but also attains competitive accuracy in comparison to heavier pre-training models. 6036 Experimental results on six Chinese and six English NLP tasks have demonstrated that FastBERT achieves a huge retrench in computation with very little loss in accuracy. The main contributions of this paper can be summarized as follows: • This paper proposes a practical speed-tunable BERT model, namely FastBERT, that balances the speed and accuracy in the response of varying request amounts; • The sample-wise adaptive mechanism and the self-distillation mechanism are combined to improve the inference time of NLP model for the first time. Their efficacy is verified on twelve NLP datasets; • The code is publicly available at https:// github.com/autoliuweijie/FastBERT. 2 Related work BERT (Devlin et al., 2019) can learn universal knowledge from mass unlabeled data and produce more performant outcomes. Many works have followed: RoBERTa (Liu et al., 2019) that uses larger corpus and longer training steps. T5 (Raffel et al., 2019) that scales up the model size even more. UER (Zhao et al., 2019) pre-trains BERT in different Chinese corpora. K-BERT (Liu et al., 2020) injects knowledge graph into BERT model. These models achieve increased accuracy with heavier settings and even more data. However, such unwieldy sizes are often hampered under stringent conditions. To be more specific, BERT-base contains 110 million parameters by stacking twelve Transformer blocks (Vaswani et al., 2017), while BERT-large expands its size to even 24 layers. ALBERT (Lan et al., 2019) shares the parameters of each layer to reduce the model size. Obviously, the inference speed for these models would be much slower than classic architectures (e.g., CNN (Kim, 2014), RNN (Wang, 2018), etc). We think a large proportion of computation is caused by redundant calculation. Knowledge distillation: Many attempts have been made to distill heavy models (teachers) into their lighter counterparts (students). PKD-BERT (Sun et al., 2019a) adopts an incremental extraction process that learns generalizations from intermediate layers of the teacher model. TinyBERT (Jiao et al., 2019) performs a two-stage learning involving both general-domain pre-training and taskspecific fine-tuning. DistilBERT (Sanh et al., 2019) Big Model (Teacher) Softmax Small Model (Student) Softmax Input x Loss(𝑃", 𝑃$) Prediction 𝑃" Prediction 𝑃$ Figure 1: Classic knowledge distillation approach: Distill a small model using a separate big model. further leveraged the inductive bias within large models by introducing a triple loss. As shown in Figure 1, student model often require a separated structure, whose effect however, depends mainly on the gains of the teacher. They are as indiscriminate to individual cases as their teachers, and only get faster in the cost of degraded performance. 
Adaptive inference: Conventional approaches in adaptive computations are performed token-wise or patch-wise, who either adds recurrent steps to individual tokens (Graves, 2016) or dynamically adjusts the number of executed layers inside discrete regions of images (Teerapittayanon et al., 2016; Figurnov et al., 2017). To the best of our knowledge, there has been no work in applying adaptive mechanisms to NLP pre-training language models for efficiency improvements so far. 3 Methodology Distinct to the above efforts, our approach fusions the adaptation and distillation into a novel speed-up approach, shown in Figure 2, achieving competitive results in both accuracy and efficiency. 3.1 Model architecture As shown in Figure 2, FastBERT consists of backbone and branches. The backbone is built upon 12-layers Transformer encoder with an additional teacher-classifier, while the branches include student-classifiers which are appended to each Transformer output to enable early outputs. 3.1.1 Backbone The backbone consists of three parts: the embedding layer, the encoder containing stacks of Transformer blocks (Vaswani et al., 2017), and the teacher classifier. The structure of the embedding layer and the encoder conform with those of BERT 6037 Transformer 0 Transformer 1 Transformer 2 Transformer L-1 … This book is really good! Not too bad, it is worth reading. Excellent! but a bit difficult to understand. Written really bad. Embedding layer One batch of input sentences StudentClassifier 0 StudentClassifier 1 StudentClassifier 2 TeacherClassifier This book is really good! Written really bad. Backbone Not too bad, it is worth reading. Excellent! but a bit difficult to understand. Not too bad, it is worth reading. Excellent! but a bit difficult to understand. Excellent! but a bit difficult to understand. Excellent! but a bit difficult to understand. 0 50 100 pos. neu. neg. … pos. neg. pos. neu. neg. pos. neu. neg. pos. neu. pos. neg. pos. neu. pos. neg. Predicted labels 0 50 100 pos. neu. neg. 0 50 100 pos. neu. neg. 0 50 100 pos. neu. neg. 0 50 100 pos. neu. neg. 0 50 100 pos. neu. neg. 0 50 100 pos. neu. neg. 0 50 100 pos. neu. neg. Low uncertainty High uncertainty Branch Figure 2: The inference process of FastBERT, where the number of executed layers with each sample varies based on its complexity. This illustrates a sample-wise adaptive mechanism. Taking a batch of inputs (batch size = 4) as an example, the Transformer0 and Student-classifier0 inferred their labels as probability distributions and calculate the individual uncertainty. Cases with low uncertainty are immediately removed from the batch, while those with higher uncertainty are sent to the next layer for further inference. (Devlin et al., 2019). Given the sentence length n, an input sentence s = [w0, w1, ...wn] will be transformed by the embedding layers to a sequence of vector representations e like (1), e = Embedding(s), (1) where e is the summation of word, position, and segment embeddings. Next, the transformer blocks in the encoder performs a layer-by-layer feature extraction as (2), hi = Transformer i(hi−1), (2) where hi (i = −1, 0, 1, ..., L −1) is the output features at the ith layer, and h−1 = e. L is the number of Transformer layers. Following the final encoding output is a teacher classifier that extracts in-domain features for downstream inferences. 
It has a fully-connected layer narrowing the dimension from 768 to 128, a self-attention layer joined with a fully-connected layer that leaves the vector size unchanged, and a fully-connected layer with a softmax function projecting the vectors to an N-class probability distribution pt as in (3), where N is the task-specific number of classes:

p_t = Teacher_Classifier(h_{L-1}). (3)

3.1.2 Branches

To provide FastBERT with more adaptability, multiple branches, i.e., the student classifiers, with the same architecture as the teacher, are added to the output of each Transformer block to enable early outputs, especially for simple cases. The student classifiers can be described as (4):

p_{s_i} = Student_Classifier_i(h_i). (4)

The student classifier is designed carefully to balance model accuracy and inference speed: overly simple networks may impair performance, while a heavy attention module severely slows down inference. Our classifier has proven to be lighter while retaining competitive accuracy; detailed verification is presented in Section 4.1.

3.2 Model training

FastBERT requires separate training steps for the backbone and the student classifiers. The parameters of one module are always frozen while the other module is being trained. The model is trained in preparation for downstream inference in three steps: pre-training of the major backbone, fine-tuning of the entire backbone, and self-distillation for the student classifiers.

3.2.1 Pre-training

The pre-training of the backbone resembles that of BERT, in the same way that our backbone resembles BERT. Any pre-training method used for BERT-like models (e.g., BERT-WWM (Cui et al., 2019), RoBERTa (Liu et al., 2019), and ERNIE (Sun et al., 2019b)) can be directly applied. Note that the teacher classifier, as it is only used for inference, stays unaffected at this time. Also conveniently, FastBERT does not even need to perform pre-training by itself, for it can freely load high-quality pre-trained models.

3.2.2 Fine-tuning for backbone

For each downstream task, we feed the task-specific data into the model, fine-tuning both the major backbone and the teacher classifier. The structure of the teacher classifier is as previously described. At this stage, the student classifiers are not yet enabled.

3.2.3 Self-distillation for branch

With the backbone well-trained for knowledge extraction, its output, as a high-quality soft label containing both the original embedding and the generalized knowledge, is distilled for training the student classifiers. As the students are mutually independent, their predictions ps are each compared with the teacher soft label pt, with the differences measured by the KL divergence in (5):

D_KL(p_s, p_t) = sum_{i=1}^{N} p_s(i) * log( p_s(i) / p_t(i) ). (5)

As there are L-1 student classifiers in FastBERT, the sum of their KL divergences is used as the total loss for self-distillation, which is formulated in (6):

Loss(p_{s_0}, ..., p_{s_{L-2}}, p_t) = sum_{i=0}^{L-2} D_KL(p_{s_i}, p_t), (6)

where p_{s_i} refers to the probability distribution of the output from student classifier i. Since this process only requires the teacher's output, we are free to use an unlimited amount of unlabeled data, instead of being restricted to the labeled data. This provides us with sufficient resources for self-distillation, which means we can always improve the student performance as long as the teacher allows. Moreover, our method differs from previous distillation methods, in that the teacher and student outputs lie within the same model.
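To make Eqs. (5) and (6) concrete, the self-distillation objective can be written as below. This is a minimal PyTorch sketch assuming each classifier returns logits; it is not the released FastBERT implementation.

import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits_list, teacher_logits):
    """Sum of KL(p_s || p_t) over the L-1 student classifiers, Eqs. (5)-(6).
    student_logits_list: list of (batch, N) logit tensors, one per student.
    teacher_logits: (batch, N) logits from the teacher classifier."""
    log_p_t = F.log_softmax(teacher_logits, dim=-1).detach()  # teacher soft label, no gradient
    total = torch.zeros((), device=teacher_logits.device)
    for logits in student_logits_list:
        log_p_s = F.log_softmax(logits, dim=-1)
        p_s = log_p_s.exp()
        kl = (p_s * (log_p_s - log_p_t)).sum(dim=-1)          # KL(p_s || p_t) per sample
        total = total + kl.mean()
    return total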
This learning process does not require additional pre-training structures, making the distillation entirely a self-learning process.

3.3 Adaptive inference

With the above steps, FastBERT is well-prepared to perform inference in an adaptive manner, which means we can adjust the number of executed encoding layers within the model according to the sample complexity.

At each Transformer layer, we measure, for each sample, whether the current inference is credible enough to be terminated. Given an input sequence, the uncertainty of a student classifier's output ps is computed with the normalized entropy in (7):

Uncertainty = ( sum_{i=1}^{N} p_s(i) log p_s(i) ) / log(1/N), (7)

where ps is the output probability distribution, and N is the number of labeled classes. With this definition of uncertainty, we make an important hypothesis.

Hypothesis 1. LUHA: the Lower the Uncertainty, the Higher the Accuracy.

Definition 1. Speed: the threshold used to distinguish high and low uncertainty.

LUHA is verified in Section 4.4. Both Uncertainty and Speed range between 0 and 1. The adaptive inference mechanism can be described as follows: at each layer of FastBERT, the corresponding student classifier predicts the label of each sample along with its measured Uncertainty. Samples with Uncertainty below the Speed are sifted out to early outputs, while samples with Uncertainty above the Speed move on to the next layer. Intuitively, with a higher Speed, fewer samples are sent to higher layers, so the overall inference speed is faster, and vice versa. Therefore, Speed can be used as a halting threshold for weighing inference accuracy against efficiency.

Table 1: FLOPs of each operation within FastBERT (M = million, N = the number of labels).
Operation   | Sub-operation                    | FLOPs   | Total
Transformer | Self-attention (768 → 768)       | 603.0M  | 1809.9M
            | Feedforward (768 → 3072 → 768)   | 1207.9M |
Classifier  | Fully-connect (768 → 128)        | 25.1M   | 46.1M
            | Self-attention (128 → 128)       | 16.8M   |
            | Fully-connect (128 → 128)        | 4.2M    |
            | Fully-connect (128 → N)          |         |
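A minimal sketch of the adaptive early-exit loop from §3.3 is shown below for a single sample. The transformer_layers, student_classifiers, and teacher_classifier arguments are hypothetical module lists standing in for the real model, and the classifiers are assumed to return probability distributions.

import torch

def normalized_entropy(probs: torch.Tensor) -> torch.Tensor:
    """Eq. (7): entropy of the predicted distribution normalized to [0, 1]."""
    n_classes = probs.size(-1)
    entropy = (probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)
    return entropy / torch.log(torch.tensor(1.0 / n_classes))

@torch.no_grad()
def adaptive_inference(h, transformer_layers, student_classifiers,
                       teacher_classifier, speed: float = 0.5):
    """Run layers until the current student's uncertainty drops below `speed`."""
    for layer, student in zip(transformer_layers[:-1], student_classifiers):
        h = layer(h)
        probs = student(h)
        if normalized_entropy(probs) < speed:   # low uncertainty -> exit early
            return probs
    h = transformer_layers[-1](h)               # otherwise use the final layer
    return teacher_classifier(h)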
FLOPs (speedup) BERT 94.47 21785M (1.00x) 65.50 21785M (1.00x) 99.31 21785M (1.00x) 77.36 21785M (1.00x) 65.93 21785M (1.00x) 96.04 21785M (1.00x) DistilBERT (6 layers) 94.64 10872M (2.00x) 64.05 10872M (2.00x) 99.10 10872M (2.00x) 76.73 10872M (2.00x) 64.25 10872M (2.00x) 95.31 10872M (2.00x) DistilBERT (3 layers) 93.98 5436M (4.00x) 63.84 5436M (4.00x) 99.05 5436M (4.00x) 76.56 5436M (4.00x) 63.50 5436M (4.00x) 93.23 5436M (4.00x) DistilBERT (1 layers) 92.88 1816M (12.00x) 59.48 1816M (12.00x) 98.95 1816M (12.00x) 74.93 1816M (12.00x) 58.59 1816M (12.00x) 91.59 1816M (12.00x) FastBERT (speed=0.1) 94.38 6013M (3.62x) 65.50 21005M (1.03x) 99.28 2060M (10.57x) 77.37 16172M (1.30x) 65.93 20659M (1.05x) 95.99 6668M (3.26x) FastBERT (speed=0.5) 93.14 2108M (10.33x) 64.64 10047M (2.16x) 99.05 1854M (11.74x) 76.57 4852M (4.48x) 64.73 9827M (2.21x) 95.32 3456M (6.30x) FastBERT (speed=0.8) 92.53 1858M (11.72x) 61.70 2356M (9.24x) 99.04 1853M (11.75x) 75.05 1965M (11.08x) 60.66 2602M (8.37x) 94.31 2460M (8.85x) 4 Experimental results In this section, we will verify the effectiveness of FastBERT on twelve NLP datasets (six in English and six in Chinese) with detailed explanations. 4.1 FLOPs analysis Floating-point operations (FLOPs) is a measure of the computational complexity of models, which indicates the number of floating-point operations that the model performs for a single process. The FLOPs has nothing to do with the model’s operating environment (CPU, GPU or TPU) and only reveals the computational complexity. Generally speaking, the bigger the model’s FLOPs is, the longer the inference time will be. With the same accuracy, models with low FLOPs are more efficient and more suitable for industrial uses. We list the measured FLOPs of both structures in Table 1, from which we can infer that, the calculation load (FLOPs) of the Classifier is much lighter than that of the Transformer. This is the basis of the speed-up of FastBERT, for although it adds additional classifiers, it achieves acceleration by reducing more computation in Transformers. 4.2 Baseline and dataset 4.2.1 Baseline In this section, we compare FastBERT against two baselines: • BERT1 The 12-layer BERT-base model was pre-trained on Wiki corpus and released by Google (Devlin et al., 2019). • DistilBERT2 The most famous distillation method of BERT with 6 layers was released by Huggingface (Sanh et al., 2019). In addition, we use the same method to distill the DistilBERT with 3 and 1 layer(s), respectively. 4.2.2 Dataset To verify the effectiveness of FastBERT, especially in industrial scenarios, six Chinese and six English datasets pressing closer to actual applications are used. The six Chinese datasets include 1https://github.com/google-research/ bert 2https://github.com/huggingface/ transformers/tree/master/examples/ distillation 6040 Ag.News Amz.F Dbpedia Yahoo Yelp.F Yelp.P Speedup (times) 0x 2x 4x 6x 8x 10x 12x Speed 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Acc. 0.6 0.7 0.8 0.9 1.0 Speed 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Acc. 0.6 0.7 0.8 0.9 1.0 Speedup (times) 0x 1x 2x 3x 4x 5x 6x 7x 8x 9x 10x 11x 12x (d) (e) (f) ChnsentiCorp Book review Shopping review LCQMC Weibo THUCNews Speedup (times) 0x 2x 4x 6x 8x 10x 12x Speed 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Acc. 0.7 0.8 0.9 1.0 Speed 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Acc. 
0.7 0.8 0.9 1.0 Speedup (times) 0x 1x 2x 3x 4x 5x 6x 7x 8x 9x 10x 11x 12x (a) (b) (c) Figure 3: The trade-offs of FastBERT on twelve datasets (six in Chinese and six in English): (a) and (d) are SpeedAccuracy relations, showing changes of Speed (the threshold of Uncertainty) in dependence of the accuracy; (b) and (e) are Speed-Speedup relations, indicating that the Speed manages the adaptibility of FastBERT; (c) and (f) are the Speedup-Accuracy relations, i.e. the trade-off between efficiency and accuracy. the sentence classification tasks (ChnSentiCorp, Book review(Qiu et al., 2018), Shopping review, Weibo and THUCNews) and a sentences-matching task (LCQMC(Liu et al., 2018)). All the Chinese datasets are available at the FastBERT project. The six English datasets (Ag.News, Amz.F, DBpedia, Yahoo, Yelp.F, and Yelp.P) are sentence classification tasks and were released in (Zhang et al., 2015). 4.3 Performance comparison To perform a fair comparison, BERT / DistilBERT / FastBERT all adopt the same configuration as follows. In this paper, L = 12. The number of self-attention heads, the hidden dimension of embedding vectors, and the max length of the input sentence are set to 12, 768 and 128 respectively. Both FastBERT and BERT use pre-trained parameters provided by Google, while DistilBERT is pretrained with (Sanh et al., 2019). We fine-tune these models using the AdamW (Loshchilov and Hutter) algorithm, a 2 × 10−5 learning rate, and a 0.1 warmup. Then, we select the model with the best accuracy in 3 epochs. For the self-distillation of FastBERT, we increase the learning rate to 2×10−4 and distill it for 5 epochs. We evaluate the text inference capabilities of these models on the twelve datasets and report their accuracy (Acc.) and sample-averaged FLOPs under different Speed values. The result of comparisons are shown in Table 2, where the Speedup is obtained by using BERT as the benchmark. It can be observed that with the setting of Speed = 0.1, FastBERT can speed up 2 to 5 times without losing accuracy for most datasets. If a little loss of accuracy is tolerated, FastBERT can be 7 to 11 times faster than BERT. Comparing to DistilBERT, FastBERT trades less accuracy to catch higher efficiency. Figure 3 illustrates FastBERT’s tradeoff in accuracy and efficiency. The speedup ratio of FastBERT are free to be adjusted between 1 and 12, while the loss of accuracy remains small, which is a very attractive feature in the industry. 98.92 96.66 93.99 90.16 85.76 83.21 77.60 76.05 66.17 57.82 96.90 90.09 83.44 79.05 73.51 74.18 62.45 63.78 62.63 57.18 95.75 83.35 77.14 74.55 69.20 69.88 64.45 66.94 60.91 50.08 StudentClassifier 0 Acc. 60% 80% 100% StudentClassifier 5 Acc. 60% 80% 100% TeacherClassifier Acc. 60% 80% 100% Uncertainty 0.0~0.1 0.1~0.2 0.2~0.3 0.3~0.4 0.4~0.5 0.5~0.6 0.6~0.7 0.7~0.8 0.8~0.9 0.9~1.0 Figure 4: The relation of classifier accuracy and average case uncertainty: Three classifiers at the bottom, in the middle, and on top of the FastBERT were analyzed, and their accuracy within various uncertainty intervals were calculated with the Book Review dataset. 6041 Average = 0.92, median = 0 Set speed = 0.8 Average = 2.3, median = 1 Set speed = 0.5 Average = 3.2, median = 2 Set speed = 0.3 0% 20% 40% 60% 0% 20% 40% 60% 0% 20% 40% 60% Distribution of layers 0 1 2 3 4 5 6 7 8 9 10 11 Figure 5: The distribution of executed layers on average in the Book review dataset, with experiments at three different speeds (0.3, 0.5 and 0.8). 
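For readability, the training configuration stated in §4.3 can be collected into a small config sketch; the values are taken from the text above, while the key names are illustrative rather than from the released code.

# Shared model configuration (BERT-base sized)
model_config = {
    "num_layers": 12,          # L = 12
    "num_attention_heads": 12,
    "hidden_size": 768,
    "max_seq_length": 128,
}

# Fine-tuning of the backbone and teacher classifier
finetune_config = {
    "optimizer": "AdamW",
    "learning_rate": 2e-5,
    "warmup_proportion": 0.1,
    "epochs": 3,               # best model selected within 3 epochs
}

# Self-distillation of the student classifiers
distill_config = {
    "optimizer": "AdamW",
    "learning_rate": 2e-4,
    "epochs": 5,
}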
4.4 LUHA hypothesis verification As is described in the Section 3.3, the adaptive inference of FastBERT is based on the LUHA hypothesis, i.e., “the Lower the Uncertainty, the Higher the Accuracy”. Here, we prove this hypothesis using the book review dataset. We intercept the classification results of Student-Classifier0, StudentClassifier5, and Teacher-Classifier in FastBERT, then count their accuracy in each uncertainty interval statistically. As shown in Figure 4, the statistical indexes confirm that the classifier follows the LUHA hypothesis, no matter it sits at the bottom, in the middle or on top of the model. From Figure 4, it is easy to mistakenly conclude that Students has better performance than Teacher due to the fact that the accuracy of Student in each uncertainty range is higher than that of Teacher. This conclusion can be denied by analysis with Figure 6(a) together. For the Teacher, more samples are located in areas with lower uncertainty, while the Students’ samples are nearly uniformly distributed. Therefore the overall accuracy of the Teacher is still higher than that of Students. 4.5 In-depth study In this section, we conduct a set of in-depth analysis of FastBERT from three aspects: the distribution of exit layer, the distribution of sample uncertainty, and the convergence during self-distillation. Layer 0 1 2 3 4 5 6 7 8 9 10 11 Uncertainty 0 0.5 1.0 Speed = 0.0 Speed = 0.3 Speed = 0.5 Speed = 0.8 (a) (b) (c) (d) Layer 0 1 2 3 4 5 6 7 8 9 10 11 Uncertainty 0 0.3 0.5 1.0 Layer 0 1 2 3 4 5 6 7 8 9 10 11 Uncertainty 0 0.5 1.0 Layer 0 1 2 3 4 5 6 7 8 9 10 11 Uncertainty 0 0.5 0.8 1.0 Figure 6: The distribution of Uncertainty at different layers of FastBERT in the Book review dataset: (a) The speed is set to 0.0, which means that all samples will pass through all the twelve layers; (b) ∼(d): The Speed is set to 0.3, 0.5, and 0.8 respectively, iand only the samples with Uncertainty higher than Speed will be sent to the next layer. 4.5.1 Layer distribution In FastBERT, each sample walks through a different number of Transformer layers due to varied complexity. For a certain condition, fewer executed layers often requires less computing resources. As illustrated in Figure 5, we investigate the distribution of exit layers under different constraint of Speeds (0.3, 0.5 and 0.8) in the book review dataset. Take Speed = 0.8 as an example, at the first layer Transformer0, 61% of the samples is able to complete the inference. This significantly eliminates unnecessary calculations in the next eleven layers. 4.5.2 Uncertainty distribution The distribution of sample uncertainty predicted by different student classifiers varies, as is illustrated in Figure 6. Observing these distributions help us to 6042 Fine-tuning (0~3 epochs) Self-distillation (3~8 epochs) Acc. FLOPs Acc. 80% 82% 84% 86% 88% FLOPs 5,000M 10,000M 15,000M 20,000M Epoch 0 1 2 3 4 5 6 7 8 Speed = 0.5 Figure 7: The change in accuracy and FLOPs of FastBERT during fine-tuning and self-distillation with the Book review dataset. The accuracy firstly increases at the fine-tuning stage, while the self-distillation reduces the FLOPs by six times with almost no loss in accuracy. further understand FastBERT. From Figure 6(a), it can be concluded that the higher the layer is posited, the lower the uncertainty with given Speed will be, indicating that the high-layer classifiers more decisive than the lower ones. 
It is worth noting that at higher layers, there are samples with uncertainty below the threshold of Uncertainty (i.e., the Speed), for these high-layer classifiers may reverse the previous judgments made by the low-layer classifiers. 4.5.3 Convergence of self-distillation Self-distillation is a crucial step to enable FastBERT. This process grants student classifiers with the abilities to infer, thereby offloading work from the teacher classifier. Taking the Book Review dataset as an example, we fine-tune the FastBERT with three epochs then self-distill it for five more epochs. Figure 7 illustrates its convergence in accuracy and FLOPs during fine-tune and selfdistillation. It could be observed that the accuracy increases with fine-tuning, while the FLOPs decrease during the self-distillation stage. 4.6 Ablation study Adaptation and self-distillation are two crucial mechanisms in FastBERT. We have preformed ablation studies to investigate the effects brought by these two mechanisms using the Book Review dataset and the Yelp.P dataset. The results are presented in Table 3, in which ‘without selfdistillation’ implies that all classifiers, including both the teacher and the students, are trained in the fine-tuning; while ‘without adaptive inference’ means that the number of executed layers of each sample is fixated to two or six. Table 3: Results of ablation studies on the Book review and Yelp.P datasets. Config. Book review Yelp.P Acc. FLOPs (speedup) Acc. FLOPs (speedup) FastBERT speed=0.2 86.98 9725M (2.23x) 95.90 52783M (4.12x) speed=0.7 85.69 3621M (6.01x) 94.67 2757M (7.90x) FastBERT without self-distillation speed=0.2 86.22 9921M (2.19x) 95.55 4173M (5.22x) speed=0.7 85.02 4282M (5.08x) 94.54 2371M (9.18x) FastBERT without adaptive inference layer=6 86.42 11123M (1.95x) 95.18 11123M (1.95x) layer=2 82.88 3707M (5.87x) 93.11 3707M (5.87x) From Table 3, we have observed that: (1) At almost the same level of speedup, FastBERT without self-distillation or adaption performs poorer; (2) When the model is accelerated more than five times, downstream accuracy degrades dramatically without adaption. It is safe to conclude that both the adaptation and self-distillation play a key role in FastBERT, which achieves both significant speedups and favorable low losses of accuracy. 5 Conclusion In this paper, we propose a fast version of BERT, namely FastBERT. Specifically, FastBERT adopts a self-distillation mechanism during the training phase and an adaptive mechanism in the inference phase, achieving the goal of gaining more efficiency with less accuracy loss. Self-distillation and adaptive inference are first introduced to NLP model in this paper. In addition, FastBERT has a very practical feature in industrial scenarios, i.e., its inference speed is tunable. Our experiments demonstrate promising results on twelve NLP datasets. Empirical results have shown that FastBERT can be 2 to 3 times faster than BERT without performance degradation. If we slack the tolerated loss in accuracy, the model is free to tune its speedup between 1 and 12 times. Besides, FastBERT remains compatible to the parameter settings of other BERT-like models (e.g., BERTWWM, ERNIE, and RoBERTa), which means these public available models can be readily loaded 6043 for FastBERT initialization. 
6 Future work These promising results point to future works in (1) linearizing the Speed-Speedup curve; (2) extending this approach to other pre-training architectures such as XLNet (Yang et al., 2019) and ELMo (Peters et al., 2018); (3) applying FastBERT on a wider range of NLP tasks, such as named entity recognition and machine translation. Acknowledgments This work is funded by 2019 Tencent Rhino-Bird Elite Training Program. Work done while this author was an intern at Tencent. References Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for chinese BERT. arXiv preprint arXiv:1906.08101. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of ACL, pages 4171–4186. Michael Figurnov, Maxwell D Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, and Ruslan Salakhutdinov. 2017. Spatially adaptive computation time for residual networks. In Proceedings of CVPR, pages 1790–1799. Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. 2014. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115. Alex Graves. 2016. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983. Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. In Advances in NeurIPS, pages 1135–1143. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. Computer Science, 14(7):38–39. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. TinyBERT: Distilling BERT for natural language understanding. arXiv preprint arXiv:1909.10351. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP, pages 1746–1751. Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of EMNLP-IJCNLP, pages 4356–4365. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020. K-BERT: Enabling language representation with knowledge graph. In Proceedings of AAAI. Xin Liu, Qingcai Chen, Chong Deng, Huajun Zeng, Jing Chen, Dongfang Li, and Buzhou Tang. 2018. Lcqmc: A large-scale chinese question matching corpus. In Proceedings of the ICCL, pages 1952– 1962. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in adam. arXiv preprint arXiv:1711.05101. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of NAACL-HLT, pages 2227–2237. Yuanyuan Qiu, Hongzheng Li, Shen Li, Yingdi Jiang, Renfen Hu, and Lijiao Yang. 2018. Revisiting correlations between intrinsic and extrinsic evaluations of word embeddings. In Proceedings of CCL, pages 209–221. Springer. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. 
Improving language understanding by generative pre-training. Technical report, OpenAI. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2014. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of bert: smaller, faster, cheaper and lighter. In NeurIPS EMC2 Workshop. 6044 Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019a. Patient knowledge distillation for bert model compression. In Proceedings of EMNLP-IJCNLP, pages 4314–4323. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019b. ERNIE: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223. Surat Teerapittayanon, Bradley McDanel, and HsiangTsung Kung. 2016. Branchynet: Fast inference via early exiting from deep neural networks. In Proceedings of ICPR, pages 2464–2469. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in NeurIPS, pages 5998– 6008. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of EMNLP, pages 353–355. Baoxin Wang. 2018. Disconnected recurrent neural networks for text categorization. In Proceedings of ACL, pages 2311–2320. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in NeurIPS, pages 649–657. Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, and Xiaoyong Du. 2019. UER: An open-source toolkit for pre-training models. In Proceedings of EMNLPIJCNLP 2019, page 241.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6045–6052 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6045 Incorporating External Knowledge through Pre-training for Natural Language to Code Generation Frank F. Xu∗, Zhengbao Jiang∗, Pengcheng Yin, Bogdan Vasilescu, Graham Neubig Carnegie Mellon University {fangzhex,zhengbaj,pcyin,vasilescu,gneubig}@cs.cmu.edu Abstract Open-domain code generation aims to generate code in a general-purpose programming language (such as Python) from natural language (NL) intents. Motivated by the intuition that developers usually retrieve resources on the web when writing code, we explore the effectiveness of incorporating two varieties of external knowledge into NL-to-code generation: automatically mined NL-code pairs from the online programming QA forum StackOverflow and programming language API documentation. Our evaluations show that combining the two sources with data augmentation and retrieval-based data re-sampling improves the current state-of-the-art by up to 2.2% absolute BLEU score on the code generation testbed CoNaLa. The code and resources are available at https://github.com/ neulab/external-knowledge-codegen. 1 Introduction Semantic parsing, the task of generating machine executable meaning representations from natural language (NL) intents, has generally focused on limited domains (Zelle and Mooney, 1996; Deborah A. Dahl and Shriber, 1994), or domain-specific languages with a limited set of operators (Berant et al., 2013; Quirk et al., 2015; Dong and Lapata, 2016; Liang et al., 2017; Krishnamurthy et al., 2017; Zhong et al., 2017; Yu et al., 2018, 2019b,a). However, recently there has been a move towards applying semantic parsing to automatically generating source code in general-purpose programming languages (Yin et al., 2018; Yao et al., 2018; Lin et al., 2018; Agashe et al., 2019; Yao et al., 2019). Prior work in this area (Xiao et al., 2016; Ling et al., 2016; Rabinovich et al., 2017; Yin and Neubig, 2017, 2018; Dong and Lapata, 2018; Suhr et al., ∗The first two authors contributed equally. Annotated pairs <code, NL> External Knowledge Resources: Pre-train Mined pairs from Parsed pairs from API docs Text-to-Code Gen. Model Noisy but real-use distributed Clean but uniformly distributed Re-sampling w/ Real Distribution Human Curated Data: Real Distribution Estimation Fine-tune Figure 1: Our approach: incorporating external knowledge by data re-sampling, pre-training and fine-tuning. 2018; Iyer et al., 2018; Yin and Neubig, 2019) used a variety of models, especially neural architectures, to achieve good performance. However, open-domain code generation for general-purpose languages like Python is challenging. For example, given the intent to choose a random file from the directory contents of the C drive, ‘C:\\’, one would expect the Python code snippet random.choice(os.listdir(‘C:\\’)), that realizes the given intent. This would involve not just generating syntactically correct code, but also using (and potentially combining) calls to APIs and libraries that implement some of the desired functionality. As we show in § 3, current code generation models still have difficulty generating the correct function calls with appropriate argument placement. 
For example, given the NL intent above, the state-of-the-art model by Yin and Neubig (2018), which uses a transition-based method to generate Python abstract syntax trees and is therefore guaranteed to produce syntactically correct code, still incorrectly outputs random.savefig(random(compile(open('C:\\'))+100).isoformat()).

A known bottleneck to training more accurate code generation models is the limited number of manually annotated training pairs available in existing human-curated datasets, which are insufficient to cover the myriad of ways in which some complex functionality could be implemented in code. However, increasing the size of labeled datasets through additional human annotation is relatively expensive.

It is also the case that human developers rarely reference such paired examples of NL and code, and rather take external resources on the web and modify them into the desired form (Brandt et al., 2009, 2010; Gu et al., 2016). Motivated by these facts, we propose to improve the performance of code generation models through a novel training strategy: pre-training the model on data extracted automatically from external knowledge resources such as existing API documentation, before fine-tuning it on a small manually curated dataset (§ 2.1). Our approach, outlined in Figure 1, combines pairs of NL intents and code snippets mined automatically from the Q&A website StackOverflow (§ 2.2), and API documentation for common software libraries (§ 2.3). (Of course, external knowledge for code covers a large variety of resources other than these two types.) While our approach is model-agnostic and generally applicable, we implement it on top of a state-of-the-art syntax-based method for code generation, TranX (Yin and Neubig, 2018), with additional hypothesis reranking (Yin and Neubig, 2019).

Experiments on the CoNaLa benchmark (Yin et al., 2018) show that incorporating external knowledge through our proposed methods increases the BLEU score from 30.1 to 32.3, outperforming the previous state-of-the-art model by up to 2.2% absolute. Qualitatively analyzing a sample of code snippets generated by our model reveals that the generated code is more likely to use the correct API calls for the desired functionality and to arrange arguments in the right order.

2 Approach

2.1 Over-arching Framework

The overall strategy for incorporating external knowledge that we take in this work is to (1) pre-train the model on the NL-code pairs obtained from external resources, then (2) fine-tune on a small manually curated corpus. This allows the model to first learn on larger amounts of potentially noisy data, while finally being tailored to the actual NL and code we want to model at test time. In order to perform this pre-training we need to convert external data sources into NL-code pairs, and we describe how to do so in the following sections.

2.2 Mined NL-code Pairs

When developers code, most will inevitably search online for code snippets demonstrating how to achieve their particular intent. One of the most prominent resources online is StackOverflow (https://stackoverflow.com), a popular programming QA forum. However, it is not the case that all code on StackOverflow actually reflects the corresponding intent stated by the questioner – some may be methods defining variables or importing necessary libraries, while other code may be completely irrelevant. Yin et al. (2018) propose training a classifier to decide whether an NL-code pair is valid, resulting in a large but noisy parallel corpus of NL intents and source code snippets. The probability assigned by the method can serve as confidence, representing the quality of the automatically mined NL-code pairs. We use these mined pairs as a first source of external knowledge.

Figure 2: Examples from Python API documentation and pre-processed code snippets, including class constructors, methods, and top-level functions. We use red, blue, and green to denote required, optional positional, and optional keyword arguments respectively.

2.3 API Documentation

Second, motivated by the intuition that much of modern software development relies on libraries, and that developers often turn to programming language and software library references for help while writing code, we consider API documentation as another source of external knowledge. Figure 2 shows some examples from the Python standard library API documentation. It contains descriptions of libraries, classes, methods, functions, and arguments. The documentation is already in a paired form consisting of code signatures and their descriptions. However, the signatures shown in the documentation mainly provide the prototype of the API rather than valid API usages appearing in source code. The text descriptions in the documentation tend to be verbose for clarity, while real questions from developers are usually succinct. We use a few heuristics to transform these to emulate real inputs a code generation system may face.

Most APIs define required and optional arguments in the signature. In real usage, developers usually provide none or only some of those arguments. To simulate this, we permute all possible combinations (with a limit) of the optional arguments and append them to the required arguments, following correct syntax. For class constructors and methods, we create a heuristic variable name based on the class name to store the instantiated class object and to call methods upon. To make a concise description for each code snippet created, we preserve only the first sentence in the corresponding documentation, as well as the first sentences that contain mentions of each argument in the snippet. In the rare case where arguments are not found in the original description, we add another sentence containing these arguments to the end of the NL snippet, ensuring all variables in the code are covered in the NL. We detail this process in Appendix A.

2.4 Re-sampling API Knowledge

External knowledge from different sources has different characteristics. NL-code pairs automatically mined from StackOverflow are good representatives of the questions that developers may ask, but are inevitably noisy.
NL-code pairs from API documentation are clean, but there may be a topical distribution shift from real questions asked by developers. For example, the library curses has significantly more API entries than json (178 vs. 17; see https://docs.python.org/3.7/library/curses.html and https://docs.python.org/3.7/library/json.html), while json is more frequently asked about and used. This distributional shift between pre-training and fine-tuning causes performance degradation, as shown later in § 3.2.

To mitigate this problem, we propose a retrieval-based re-sampling method to close the gap between the API documentation and the actual NL-code pairs we want to model. We use both human-annotated data D_ann and mined data D_mine to model the distribution of NL-code pairs because they are both produced by real users. For each sample in this real usage distribution, we retrieve k NL-code pairs from the set of pairs harvested from API documentation D_API and aggregate the frequencies of each pair y ∈ D_API being retrieved:

freq(y) = \sum_{x \in D_{ann+mine}} \delta\big(y \in R(x, D_{API}, k)\big),

where R(x, D_{API}, k) retrieves the top k most similar samples from D_API given x, either according to the NL intent or the code snippet, and δ(·) is Kronecker's delta function, returning 1 if the internal condition is true and 0 otherwise. We use the BM25 retrieval algorithm (Jones et al., 2000) implemented in ElasticSearch (https://github.com/elastic/elasticsearch); when retrieving with code snippets, all punctuation marks are removed. We take this frequency and calculate the probability distribution after smoothing with a temperature τ ∈ [1, ∞]:

P(y) = \frac{freq(y)^{1/\tau}}{\sum_{y' \in D_{API}} freq(y')^{1/\tau}}

As τ changes from 1 to ∞, P(y) shifts from a distribution proportional to the frequency to a uniform distribution. Using this distribution, we can sample NL-code pairs from the API documentation that are more likely to be widely-used API calls.

3 Experiments

3.1 Experimental Settings

Dataset and Metric: Although the proposed approach is generally applicable and model-agnostic, for evaluation purposes we choose CoNaLa (Yin et al., 2018) as the human-annotated dataset (2,179 training, 200 dev and 500 test samples). It covers real-world English queries about Python with diverse intents. We use the same evaluation metric as the CoNaLa benchmark, corpus-level BLEU calculated on target code outputs in the test set.

Mined Pairs: We use the CoNaLa-Mined (Yin et al., 2018) dataset of 600K NL-code pairs in Python automatically mined from StackOverflow (§ 2.2). We sort all pairs by their confidence scores and find that approximately the top 100K samples are of reasonable quality in terms of code correctness and NL-code correspondence. We therefore choose the top 100K pairs for the experiments.

API Documentation Pairs: We parsed all the module documentation, including libraries, built-in types and functions, included in the Python 3.7.5 distribution (https://docs.python.org/release/3.7.5/library/index.html). After pre-processing (§ 2.3), we create about 13K distinct NL-code pairs (without re-sampling) from Python API documentation. For fair comparison, we also sample the same number of pairs for the re-sampling setting (§ 2.4).

Data Strategy       Method            BLEU
Man                 –                 27.20
Man+Mine            50k               27.94
Man+Mine            100k              28.14
Man+Mine+API        w/o re-sampling   27.84
Man+Mine+API        direct intent     29.66
Man+Mine+API        dist. intent      29.31
Man+Mine+API        direct code       30.26
Man+Mine+API        dist. code        30.69
Man                 +rerank           30.11
Man+Mine (100k)     +rerank           31.42
Our best            +rerank           32.26

Table 1: Performance comparison of different strategies to incorporate external knowledge.
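To make the re-sampling of § 2.4 concrete, the following is a minimal sketch of the frequency aggregation and temperature smoothing defined above, assuming some retrieval routine (e.g., a BM25 query against the API-doc pairs) is already available. The function name resample_api_pairs and the retrieve_top_k callback are hypothetical placeholders rather than the authors' released code; the default k = 1 and τ = 2 follow the best setting reported in § 3.1.

```python
import random
from collections import Counter
from typing import Callable, List, Sequence

def resample_api_pairs(
    real_examples: Sequence[str],       # NL intents or code snippets from D_ann + D_mine
    retrieve_top_k: Callable[[str, int], List[str]],  # e.g., a BM25 query over D_API
    k: int = 1,
    tau: float = 2.0,
    n_samples: int = 13000,
) -> List[str]:
    """Sample API-doc pairs y with probability proportional to freq(y)^(1/tau)."""
    # freq(y): how often pair y appears in the top-k retrievals for the real examples x.
    freq = Counter()
    for x in real_examples:
        for y in retrieve_top_k(x, k):
            freq[y] += 1

    # Temperature smoothing: tau = 1 keeps the raw retrieval-frequency distribution,
    # while tau -> infinity flattens it towards a uniform distribution.
    pairs = list(freq.keys())
    weights = [freq[y] ** (1.0 / tau) for y in pairs]
    total = sum(weights)
    probs = [w / total for w in weights]

    # Draw (with replacement) the API-doc pairs used for pre-training.
    return random.choices(pairs, weights=probs, k=n_samples)
```

Pairs that are never retrieved receive zero probability, which is consistent with the formula above and with the intuition that rarely-used APIs should appear less often in the pre-training data.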
Methods: We choose the current state-of-the-art NL-to-code generation model TranX (Yin and Neubig, 2018) with hypothesis reranking (Yin and Neubig, 2019) as the base model. Plus, we incorporate length normalization (Cho et al., 2014) to prevent beam search from favoring shorter results over longer ones. Man denotes training solely on CoNaLa. Man+Mine refers to first pre-training on mined data, then fine-tuning on CoNaLa. Man+Mine+API combines both mined data and API documentation for pre-training. As a comparison to our distribution-based method (denoted by dist., § 2.4), we also attempt to directly retrieve top 5 NL-code pairs from API documents (denoted by direct).6 Implementation Details: We experiment with k = {1, 3, 5} and τ = {1, 2, 5} in re-sampling, and find that k = 1 and τ = 2 perform the best. We follow the original hyper-parameters in TranX, except that we use a batch size of 64 and 10 in pre-training and fine-tuning respectively. 3.2 Results Results are summarized in Table 1. We can first see that by incorporating more noisy mined data during pre-training allows for a small improvement due to increased coverage from the much larger training set. Further, if we add the pairs harvested from API docs for pre-training without re-sampling the performance drops, validating the challenge of distributional shift mentioned in § 2.4. Comparing the two re-sampling strategies direct vs. dist., and two different retrieval targets NL intent vs. code snippet, we can see that dist. performs better with the code snippet as the retrieval target. We expect that using code snippets to re6We choose 5 to obtain comparable amount of pairs. trieve pairs performs better because it makes the generation target, the code snippet, more similar to the real-world distribution, thus better training the decoder. It is also partly because API descriptions are inherently different than questions asked by developers (e.g. they have more verbose wording), causing intent retrieval to be less accurate. Lastly, we apply hypothesis reranking to both the base model and our best approach and find improvements afforded by our proposed strategy of incorporating external knowledge are mostly orthogonal to those from hypothesis reranking. After showing the effectiveness of our proposed re-sampling strategy, we are interested in the performance on more-used versus less-used APIs for the potentially skewed overall performance. We use string matching heuristics to obtain the standard Python APIs used in the dataset and calculated the average frequency of API usages in each data instance. We then select the top 200 and the bottom 200 instances out of the 500 test samples in terms of API usage frequencies. Before and after adding API docs into pre-training, the BLEU score on both splits saw improvements: for high-frequency split, it goes from 28.67 to 30.91 and for low-frequency split, it goes from 27.55 to 30.05, indicating that although the re-sampling would skew towards highfrequency APIs, with the appropriate smoothing temperature experimentation, it will still contribute to performance increases on low-frequency APIs. Besides using BLEU scores to perform holistic evaluation, we also perform more fine-grained analysis of what types of tokens generated are improving. We apply heuristics on the abstract syntax tree of the generated code to identify tokens for API calls and variable names in the test data, and calculated the token-level accuracy for each. 
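The AST heuristics behind this token-level analysis are not spelled out in the paper; the snippet below is a rough, hypothetical approximation using Python's standard ast module, shown only to illustrate the kind of identification involved (treating assignment targets as variable names is a simplification of whatever the authors actually did).

```python
import ast

def call_and_variable_names(snippet: str):
    """Return (api_call_names, variable_names) found in one Python snippet."""
    calls, variables = set(), set()
    try:
        tree = ast.parse(snippet)
    except SyntaxError:
        return calls, variables  # unparsable generations contribute nothing

    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            # Flatten dotted calls such as os.listdir(...) into "os.listdir".
            func, parts = node.func, []
            while isinstance(func, ast.Attribute):
                parts.append(func.attr)
                func = func.value
            if isinstance(func, ast.Name):
                parts.append(func.id)
                calls.add(".".join(reversed(parts)))
        elif isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
            # Rough proxy: names bound by assignment count as variable names.
            variables.add(node.id)
    return calls, variables

print(call_and_variable_names("f = open('f.txt', 'w')"))
# -> ({'open'}, {'f'})
```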
The API call accuracy increases from 31.5% to 36.8% and the variable name accuracy from 41.2% to 43.0% after adding external resources, meaning that both the API calls and argument usages are getting better using our approach. 3.3 Case Study We further show selected outputs from both the baseline and our best approach in Table 2. In general, we can see that the NL to code generation task is still challenging, especially with more complex intents that require nested or chained API calls, or functions with more arguments. The vanilla model already can generate basic functions and 6049 Open a file “f.txt” in write mode. ✓f=open(‘f.txt’, ‘w’) ♠f=open(‘f.txt’, ‘f.txt’) ♣f=open(‘f.txt’, ‘w’) lower a string text and remove non-alphanumeric characters aside from space. ✓re.sub(r‘[^\sa−zA−Z0−9]’, ‘’, text). lower().strip() ♠text.decode.translate(text.strip(), ‘non-alphanumeric’, ‘’) ♣re.sub(r‘[^\sa−zA−Z0−9]’, ‘’, text) choose a random file from the directory contents of the C drive, ‘C:\\’. ✓random.choice(os.listdir(‘C:\\’)) ♠random.savefig(random(compile(open(‘C:\\’) )+100).isoformat()) ♣random.choice(os.path.expanduser(‘C:\\’)) Table 2: Examples, where ✓is the ground-truth code snippet, ♠is the original output, and ♣is the output with our proposed methods. Correct and erroneous function calls are marked in blue and red respectively. copy strings/variables to the output, but we observe that incorporating external knowledge improves the results in two main ways: 1) better argument placement for APIs, and 2) better selection of which API call should be used for a certain intent. In the first example, we can see that although the baseline gets the function call “open()” correct, it fails to generate the correct second argument specifying write mode, while our approach is able to successfully generate the appropriate ‘w’. In the second and third example, we can see that the baseline uses the wrong API calls, and sometimes “makes up” APIs on its own (e.g. “random.savefig()”). However, our approach’s outputs, while not perfect, are much more successful at generating correct API calls that actually exist and make sense for the intent. On a closer look, we can observe that both the addition of mined examples and API docs may have brought the improvement. The example of the “open()” function added from API docs uses the default mode “r”, so learning the meaning of “w” argument is due to the added mined real examples, but learning the argument placement (first file name as a string, second a shorthand mode identifier as a character) may have occurred from the API docs. In other examples, “random.choice()” and “re.sub()” both are Python standard library APIs so they are included in the API doc examples. 4 Conclusion and Future Work We proposed a model-agnostic approach based on data augmentation, retrieval and data re-sampling, to incorporate external knowledge into code generation models, which achieved state-of-the-art results on the CoNaLa open-domain code generation task. In the future, evaluation by automatically executing generated code with test cases could be a better way to assess code generation results. It will also likely be useful to generalize our re-sampling procedures to zero-shot scenarios, where a programmer writes a library and documents it, but nobody has used it yet. 
For example, developers may provide relative estimates of each documented API usages to guide the re-sampling; or we could find nearest neighbors to each API call in terms of semantics and use existing usage statistics as estimates to guide the re-sampling. Acknowledgments This research was supported by NSF Award No. 1815287 “Open-domain, Data-driven Code Synthesis from Natural Language.” References Rajas Agashe, Srinivasan Iyer, and Luke Zettlemoyer. 2019. JuICe: A large scale distantly supervised dataset for open domain context-based code generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5435–5445, Hong Kong, China. Association for Computational Linguistics. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544, Seattle, Washington, USA. Association for Computational Linguistics. Joel Brandt, Mira Dontcheva, Marcos Weskamp, and Scott R Klemmer. 2010. Example-centric programming: integrating web search into the development environment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 513–522. ACM. Joel Brandt, Philip J Guo, Joel Lewenstein, Mira Dontcheva, and Scott R Klemmer. 2009. Two studies of opportunistic programming: interleaving web foraging, learning, and writing code. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1589–1598. ACM. 6050 Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar. Association for Computational Linguistics. Michael Brown William Fisher Kate Hunicke-Smith David Pallett Christine Pao Alexander Rudnicky Deborah A. Dahl, Madeleine Bates and Elizabeth Shriber. 1994. Expanding the scope of the ATIS task: The ATIS-3 corpus. Proceedings of the workshop on Human Language Technology, pages 43–48. Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 33–43, Berlin, Germany. Association for Computational Linguistics. Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731–742, Melbourne, Australia. Association for Computational Linguistics. Xiaodong Gu, Hongyu Zhang, Dongmei Zhang, and Sunghun Kim. 2016. Deep API learning. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering, pages 631–642. ACM. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2018. Mapping language to code in programmatic context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1643–1652, Brussels, Belgium. Association for Computational Linguistics. K. Sparck Jones, S. Walker, and S.E. Robertson. 2000. 
A probabilistic model of information retrieval: development and comparative experiments: Part 1. Information Processing & Management, 36(6):779 – 808. Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner. 2017. Neural semantic parsing with type constraints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1516–1526, Copenhagen, Denmark. Association for Computational Linguistics. Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 23–33, Vancouver, Canada. Association for Computational Linguistics. Xi Victoria Lin, Chenglong Wang, Luke Zettlemoyer, and Michael D. Ernst. 2018. NL2Bash: A corpus and semantic parser for natural language interface to the linux operating system. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. European Languages Resources Association (ELRA). Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tom´aˇs Koˇcisk´y, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 599–609, Berlin, Germany. Association for Computational Linguistics. Chris Quirk, Raymond Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 878–888, Beijing, China. Association for Computational Linguistics. Maxim Rabinovich, Mitchell Stern, and Dan Klein. 2017. Abstract syntax networks for code generation and semantic parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1139– 1149, Vancouver, Canada. Association for Computational Linguistics. Alane Suhr, Srinivasan Iyer, and Yoav Artzi. 2018. Learning to map context-dependent sentences to executable formal queries. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2238–2249, New Orleans, Louisiana. Association for Computational Linguistics. Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1341– 1350, Berlin, Germany. Association for Computational Linguistics. Ziyu Yao, Jayavardhan Reddy Peddamail, and Huan Sun. 2019. Coacor: Code annotation for code retrieval with reinforcement learning. In The World Wide Web Conference, pages 2203–2214. ACM. Ziyu Yao, Daniel S Weld, Wei-Peng Chen, and Huan Sun. 2018. Staqc: A systematically mined questioncode dataset from stack overflow. In Proceedings of the 2018 World Wide Web Conference, pages 1693– 1703. International World Wide Web Conferences Steering Committee. 6051 Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018. Learning to mine aligned code and natural language pairs from stack overflow. 
In International Conference on Mining Software Repositories, MSR, pages 476–486. ACM. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440–450, Vancouver, Canada. Association for Computational Linguistics. Pengcheng Yin and Graham Neubig. 2018. TRANX: A transition-based neural abstract syntax parser for semantic parsing and code generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 7–12, Brussels, Belgium. Association for Computational Linguistics. Pengcheng Yin and Graham Neubig. 2019. Reranking for neural semantic parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4553–4559, Florence, Italy. Association for Computational Linguistics. Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter Lasecki, and Dragomir Radev. 2019a. CoSQL: A conversational text-to-SQL challenge towards crossdomain natural language interfaces to databases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1962– 1979, Hong Kong, China. Association for Computational Linguistics. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A largescale human-labeled dataset for complex and crossdomain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics. Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, and Dragomir Radev. 2019b. SParC: Cross-domain semantic parsing in context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4511–4523, Florence, Italy. Association for Computational Linguistics. John M Zelle and Raymond J Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the national conference on artificial intelligence, pages 1050–1055. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103. 6052 A API Documentation Pre-processing Here we describe detailed heuristics used for API documentation preprocessing. The goal is to harvest NL-code pairs with API docs as a source. A.1 Arguments Most APIs will have arguments, either required or optional. For the required arguments, we leave them “as-is”. We deal with two types of optional arguments, positional arguments and keyword arguments through permutation and sampling. In the Python documentation, optional positional arguments are bracketed in “[.., [..]]”. 
Nested brackets are commonly used to represent more than one possible optional positional arguments. Another type of optional arguments are implemented using keyword arguments in the form of key=default. In real usage, developers usually only provide none or some of those arguments. To simulate this, we permute all possible combinations of the optional arguments, and append them to the required arguments. For example, if the code signature in the documentation writes “collections.deque([iterable[, maxlen]])”, we produce all 3 possible usages: “collections.deque()”, “collections.deque(iterable)”, and “collections.deque(iterable, maxlen)”. For keyword arguments like “heapq.nlargest(n, iterable, key=None)”, we will also include “heapq.nlargest(n, iterable)” in addition. The total number of permutations is n+1 for a function with n optional positional arguments, and 2n = n 0  + n 1  + ... + n n  for a function with n optional keyword arguments, which leads to exponentially large number of samples for functions with many optional keywords. Motivated by the observation that developers rarely specify all of the optional arguments, but rather tend to use default values, we only keep the top 10 permutations with the least number of optional arguments. A.2 Class Initializers and Methods Other heuristics are used to transform code signatures related to classes to emulate real usage. For class initializers in the documentation, we construct an assignment statement with lowercased variable name using the first character of the class name to store the instantiated class, e.g. d = collections.deque(iterable). For class methods, we prepend a heuristically created variable name to the method call, emulating a real method call on an instantiated class, e.g. d.append(x). A.3 Documentation Official documentation tends to be verbose for clarity, while real questions from developers are usually succinct. Thus we use the following heuristics to keep only sentences in the document that are necessary for generating the code as the intent text. We include the first sentence because it usually describes the functionality of the API. For each argument in the emulated API usage code snippet, we include the first sentence in the documentation that mentions the argument through string matching. For arguments not mentioned in the documentation, we add a sentence in the end: “With arguments ’arg name’ ...” to ensure all arguments are covered verbatim in the intent text.
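Purely as an illustration of the argument-handling heuristics in A.1 and A.2, the sketch below enumerates emulated usages for a documented signature. The signature representation (separate lists of required arguments, optional positional arguments, and keyword arguments with their defaults) and the helper name emulate_usages are simplifying assumptions; parsing of the actual documentation markup is omitted.

```python
from itertools import combinations

def emulate_usages(func, required, opt_positional=(), opt_keyword=None, max_usages=10):
    """Enumerate plausible call snippets for one documented signature."""
    opt_keyword = dict(opt_keyword or {})
    candidates = []
    # Optional positional arguments (nested "[..[..]]" brackets) can only be
    # supplied left-to-right, giving n + 1 possibilities.
    for i in range(len(opt_positional) + 1):
        args = list(required) + list(opt_positional[:i])
        candidates.append((i, f"{func}({', '.join(args)})"))
    # Optional keyword arguments can appear in any subset, giving 2^n possibilities.
    for r in range(len(opt_keyword) + 1):
        for subset in combinations(opt_keyword, r):
            args = list(required) + [f"{k}={opt_keyword[k]}" for k in subset]
            candidates.append((r, f"{func}({', '.join(args)})"))
    # Keep only the usages with the fewest optional arguments, mirroring the
    # "top 10 permutations with the least number of optional arguments" rule.
    candidates.sort(key=lambda c: c[0])
    seen, usages = set(), []
    for _, snippet in candidates:
        if snippet not in seen:
            seen.add(snippet)
            usages.append(snippet)
    return usages[:max_usages]

print(emulate_usages("collections.deque", [], opt_positional=("iterable", "maxlen")))
print(emulate_usages("heapq.nlargest", ["n", "iterable"], opt_keyword={"key": "None"}))
```

Run on the two signatures from Figure 2, this reproduces the emulated usages shown there, e.g. collections.deque(), collections.deque(iterable), collections.deque(iterable, maxlen), heapq.nlargest(n, iterable), and heapq.nlargest(n, iterable, key=None).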
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6053–6065 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6053 LogicalFactChecker: Leveraging Logical Operations for Fact Checking with Graph Module Network Wanjun Zhong1∗, Duyu Tang2, Zhangyin Feng3∗, Nan Duan2, Ming Zhou2 Ming Gong2, Linjun Shou2, Daxin Jiang2, Jiahai Wang1 and Jian Yin1 1 The School of Data and Computer Science, Sun Yat-sen University. Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, P.R.China 2 Microsoft Corporation 3 Harbin Institute of Technology {zhongwj25@mail2,wangjiah@mail,issjyin@mail}.sysu.edu.cn {dutang,nanduan,mingzhou,migon,lisho,djiang}@microsoft.com [email protected] Abstract Verifying the correctness of a textual statement requires not only semantic reasoning about the meaning of words, but also symbolic reasoning about logical operations like count, superlative, aggregation, etc. In this work, we propose LogicalFactChecker, a neural network approach capable of leveraging logical operations for fact checking. It achieves the state-of-the-art performance on TABFACT, a large-scale, benchmark dataset built for verifying a textual statement with semi-structured tables. This is achieved by a graph module network built upon the Transformer-based architecture. With a textual statement and a table as the input, LogicalFactChecker automatically derives a program (a.k.a. logical form) of the statement in a semantic parsing manner. A heterogeneous graph is then constructed to capture not only the structures of the table and the program, but also the connections between inputs with different modalities. Such a graph reveals the related contexts of each word in the statement, the table and the program. The graph is used to obtain graphenhanced contextual representations of words in Transformer-based architecture. After that, a program-driven module network is further introduced to exploit the hierarchical structure of the program, where semantic compositionality is dynamically modeled along the program structure with a set of function-specific modules. Ablation experiments suggest that both the heterogeneous graph and the module network are important to obtain strong results. 1 Introduction Fact checking for textual statements has emerged as an essential research topic recently because of the unprecedented amount of false news and rumors spreading through the internet (Thorne et al., 2018; ∗Work done while this author was an intern at Microsoft Research. Year Venue Winner Score 2005 Arlandastad David Patrick 272 2004 Arlandastad Matthew King 270 2003 Falsterbo Titch Moore 273 2002 Halmstad Thomas Besancenez 279 Table Statement In 2004, the score is less than 270. Label REFUTED Program 𝑙𝑒𝑠𝑠( ℎ𝑜𝑝( 𝑓𝑖𝑙𝑡𝑒𝑟_𝑒𝑞( 𝑌𝑒𝑎𝑟; 2004); 𝑆𝑐𝑜𝑟𝑒); 270) Figure 1: An example of table-based fact checking. Given a statement and a table as the input, the task is to predict the label. Program reflects the underlying meaning of the statement, which should be considered for fact checking. Chen et al., 2019; Goodrich et al., 2019; Nakamura et al., 2019; Kry´sci´nski et al., 2019; Vaibhav et al., 2019). Online misinformation may manipulate people’s opinions and lead to significant influence on essential social events like political elections (Faris et al., 2017). In this work, we study fact checking, with the goal of automatically assessing the truthfulness of a textual statement. 
The majority of previous studies in fact checking mainly focused on making better use of the meaning of words, while rarely considered symbolic reasoning about logical operations (such as “count”, “superlative”, “aggregation”). However, modeling logical operations is an essential step towards the modeling of complex reasoning and semantic compositionality. Figure 1 shows a motivating example for table-based fact checking, where the evidence used for verifying the statement comes from a semi-structured table. We can see that correctly verifying the statement “In 2004, the score is less than 270” requires a system to not only discover the connections between tokens in the statement and the table, but more importantly understand the meaning of logical operations and how they interact in a structural way to form a whole. Under this 6054 Table Statement In 2004, the score is less than 270. REFUTED Program 𝑙𝑒𝑠𝑠( ℎ𝑜𝑝𝑓𝑖𝑙𝑡𝑒𝑟_𝑒𝑞𝑌𝑒𝑎𝑟; 2004 ; 𝑆𝑐𝑜𝑟𝑒; 270) Semantic Parser Graph Construction Contextual Representations Semantic Compositionality Year Venue Winner Score 2005 Arlandastad David Patrick 272 2004 Arlandastad Matthew King 270 2003 Falsterbo Titch Moore 273 2002 Halmstad Thomas Besancenez 279 … Figure 2: An overview of our approach LogicalFactChecker. It includes a semantic parser to generate program (§ 3.5), a graph construction mechanism (§ 3.2), a graph-based contextual representation learning for tokens (§ 3.3) and a semantic composition model over the program by neural module network (§ 3.4). consideration, we use table-based fact checking as the testbed to investigate how to exploit logical operations in fact checking. In this paper, We present LogicalFactChecker, a neural network approach that leverages logical operations for fact checking when semi-structured tables are given as evidence. Taking a statement and a table as the input, it first derives a program, also known as the logical form, in a semantic parsing manner (Liang, 2016). Then, our system builds a heterogeneous graph to capture the connections among the statement, the table and the program. Such connections reflect the related context of each token in the graph, which are used to define attention masks in a Transformer-based (Vaswani et al., 2017) framework. The attention masks are used to learn graph-enhanced contextual representations of tokens1. We further develop a program-guided neural module network to capture the structural and compositional semantics of the program for semantic compositionality. (Socher et al., 2013; Andreas et al., 2015). Graph nodes, whose representations are computed using the contextual representations of their constituents, are considered as arguments, and logical operations are considered as modules to recursively produce representations of higher level nodes along the program. Experiments show that our system outperforms previous systems and achieves the state-of-the-art verification accuracy. The contributions of this paper can be summarized as follows: • We propose LogicalFactChecker, a graphbased neural module network, which utilizes logical operations for fact-checking. 1Here, tokens includes word pieces in the statement, table column names, table row names, table cells, and the program. • Our system achieves the state-of-the-art performance on TABFACT, a large-scale and benchmark dataset for table-based fact checking. 
• Experiments show that both the graphenhanced contextual representation learning mechanism and the program-guided semantic compositionality learning mechanism improve the performance. 2 Task Definition We study the task of table-based fact checking in this paper. This task is to assess the veracity of a statement when a table is given as evidence. Specifically, we evaluate our system on TABFACT (Chen et al., 2019), a large benchmark dataset for tablebased fact checking. With a given semi-structured table and a statement, systems are required to perform reasoning about the structure and content of the table and assess whether the statement is “ENTAILED” or “REFUTED” by the table. The official evaluation metric is the accuracy for the two-way classification (ENTAILED/REFUTED). TABFACT consists of 118,439 statements and 16,621 tables from Wikipedia. More details about the dataset are given in Appendix A. 3 LogicalFactChecker: Methodology In this section, we present our approach LogicalFactChecker, which simultaneously considers the meaning of words, inner structure of tables and programs, and logical operations for fact-checking. One way to leverage program information is to use standard semantic parsing methods, where automatically generated programs are directly executed on 6055 tables to get results. However, TABFACT does not provide annotated programs. This puts the problem in a weak-supervised learning setting, which is one of the major challenges in the semantic parsing field. In this work, we use programs in a soft way that programs are represented with neural modules to guide the reasoning process between a textual statement and a table. Figure 2 gives an overview of our approach. With a statement and a corresponding table, our system begins with program generation, which synthesizes a program. Then, we build a heterogeneous graph for capturing the inner structure of the input. With the constructed graph, we incorporate a graph-based attention mask into the Transformer for learning graph-enhanced token representations. Lastly, we learn the semantic compositionality by developing a program-guided neural module network and make the final prediction. This section is organized as follows. We first describe the format of the program (§ 3.1) for a more transparent illustration. After that, the graph construction approach (§ 3.2) is presented first, followed by a graph-enhanced contextual representation learning mechanism (§ 3.3). Moreover, we introduce how to learn semantic compositionality over the program by neural module network (§ 3.4). At last, we describe how to synthesize programs by our semantic parsing model (§3.5). 3.1 Program Representation Before presenting the technical details, we first describe the form of the program (also known as logical form) for clearer illustrations. With a given natural language statement, we begin by synthesizing the corresponding semantic representation (LISP-like program here) using semantic parsing techniques. Following the notation defined by Chen et al. (2019), the functions (logical operations) formulating the programs come from a fixed set of over 50 functions, including “count” and “argmax”, etc. The detailed description of the functions is given in Appendix C. Each function takes arguments of predefined types like string, number, bool or sub-table as input. The programs have hierarchical structure because the functions can be nested. 
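To make the nesting concrete, the toy interpreter below evaluates the program from Figure 1, less(hop(filter_eq(Year; 2004); Score); 270), against the accompanying table. This is only an illustration of what such a logical form denotes; as noted above, LogicalFactChecker does not execute programs directly but uses them softly through neural modules (§ 3.4), and the three functions shown are a small assumed subset of the full grammar in Appendix C.

```python
# Toy table from Figure 1: a list of rows (dicts keyed by column name).
TABLE = [
    {"Year": 2005, "Venue": "Arlandastad", "Winner": "David Patrick", "Score": 272},
    {"Year": 2004, "Venue": "Arlandastad", "Winner": "Matthew King", "Score": 270},
    {"Year": 2003, "Venue": "Falsterbo", "Winner": "Titch Moore", "Score": 273},
    {"Year": 2002, "Venue": "Halmstad", "Winner": "Thomas Besancenez", "Score": 279},
]

def filter_eq(rows, column, value):
    """Sub-table of rows whose `column` equals `value`."""
    return [r for r in rows if r[column] == value]

def hop(rows, column):
    """Value of `column` in a (single-row) sub-table."""
    return rows[0][column]

def less(a, b):
    """Boolean: is a < b?"""
    return a < b

# less(hop(filter_eq(Year; 2004); Score); 270)
result = less(hop(filter_eq(TABLE, "Year", 2004), "Score"), 270)
print(result)  # False -> the statement "the score is less than 270" is REFUTED
```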
Figure 3 shows an example of a statement and a generated program, accompanying with the derivation of the program and its semantic structure. The details of the generation of a program for a textual statement are introduced in § 3.5. 𝑙𝑒𝑠𝑠( ℎ𝑜𝑝( 𝑓𝑖𝑙𝑡𝑒𝑟_𝑒𝑞( 𝑌𝑒𝑎𝑟; 2004); 𝑆𝑐𝑜𝑟𝑒); 270) 𝑙𝑒𝑠𝑠( ∙; ∙) ℎ𝑜𝑝( ∙; ∙) 𝑓𝑖𝑙𝑡𝑒𝑟_𝑒𝑞( ∙; ∙) In 2004, the score is less than 270. Year Score 2004 270 Statement Program S → 𝑙𝑒𝑠𝑠(S1, ARG0) S1 → ℎ𝑜𝑝(S2, ARG1) S2 → 𝑓𝑖𝑙𝑡𝑒𝑟_𝑒𝑞(ARG2, ARG3) ARG0 → 270 ARG1 → Score ARG2 → Year ARG3 → 2004 (a) Derivation with basic operations (b) The structure of compositionality Figure 3: An example of a program with its semantic structure and derivation with basic logical operations. 3.2 Graph Construction In this part, we introduce how to construct a graph to explicitly reveal the inner structure of programs and tables, and the connections among statements and them. Figure 4 shows an example of the graph. Specifically, with a statement, a table and a proYear 2005 2004 2003 2002 Venue Arlandastad Arlandastad Falsterbo Halmstad Winner David Patrick Matthew King Titch Moore Thomas Besancenez Score 272 270 273 279 Row 0 Row 1 Row 2 Row 3 Statement Table 𝑙𝑒𝑠𝑠( ∙; ∙) ℎ𝑜𝑝( ∙; ∙) 𝑓𝑖𝑙𝑡𝑒𝑟_𝑒𝑞( ∙; ∙) Year Score 2004 270 Program In 2004, the score is less than 270. Figure 4: An example of the constructed graph. gram, our system operates in the following steps. • For a table, we define nodes as columns, cells, and rows, which is partly inspired by the design of the graph for table-based question answering (M¨uller et al., 2019). As shown in Figure 4, each cell is connected to its corresponding column node and row node. Cell nodes in the same row are fully-connected to each other. • Program is a naturally structural representation consisting of functions and arguments. In the program, functions and arguments are represented as nodes, and they are hierarchically connected along the structure. Each node is connected to its direct parents and children. 6056 Arguments are also linked to corresponding column names of the table. • By default, in the statement, all tokens are the related context of each other, so they are connected. To further leverage the connections from the statement to the table and the program, we add links for tokens which are linked to cells or columns in the table, and legitimate arguments in the program. After these processes, the extracted graph not only maintains the inner-structure of tables and programs but also explores the connections among aligned entities mentioned in different contents. 3.3 Graph-Enhanced Contextual Representations of Tokens We describe how to utilize the graph structure for learning graph-enhanced contextual representations of tokens 2. A simple way to learn contextual representations is to concatenate all the contents3 as a single string and use the original attention mask in Transformer, where all the tokens are regarded as the contexts for each token. However, this simple way fails to capture the semantic structure revealed in the constructed graph. For example, according to Figure 4, the content “2004” exists in the statement, program and table. These aligned entity nodes for “2004” should be more related with each other when our model calculate contextual representations. To address this problem, we use the graph structure to re-define the related contexts of each token for learning a graph-enhanced representation. Specifically, we present a graph-based mask matrix for self-attention mechanism in Transformer. 
The graph-based mask matrix G is a 0-1 matrix of shape N × N, where N denotes the total number of tokens in the sequence. This graph-based mask matrix records which tokens are the related context of the current token: G_{ij} is assigned 1 if token j is the related context of token i in the graph, and 0 otherwise. The constructed graph-based mask matrix is then fed into BERT (Devlin et al., 2018) for learning graph-enhanced contextual representations. We use the graph-based mask to control the contexts that each token can attend to in the self-attention mechanism of BERT during the encoding process. BERT maps the input x of length T into a sequence of hidden vectors as follows:

h(x) = [h(x)_1, h(x)_2, \cdots, h(x)_T]    (1)

These representations are enhanced by the structure of the constructed graph.

2 In this work, tokens include word pieces in the statement, column names, row names and contents of cells in the table, and function names in the program.
3 All the contents indicate texts in the concatenated sequence of the linearized table, the statement, and the sequence of the linearized program.

3.4 Semantic Compositionality with Neural Module Network

In the previous subsection, we described how our system learns graph-enhanced contextual representations of tokens. The process mentioned above learns token-level semantic interaction. In this subsection, we make a further improvement by learning logic-level semantics using program information. Our motivation is to utilize the structures and logical operations of programs for learning logic-enhanced compositional semantics. Since the logical operations forming the programs come from a fixed set of functions, we design a modular and composable network, where each logical operation is represented as a tailored module and modules are composed along the program structure.

We first describe how we initialize the representation for each entity node in the graph (§ 3.4.1). After that, we describe how to learn semantic compositionality based on the program, including the design of each neural module (§ 3.4.2) and how these modules are composed recursively along the structure of the program (§ 3.4.3).

3.4.1 Entity Node Representation

In a program, entity nodes denote a set of entities (such as "David Patrick") from the input contexts, while function nodes denote a set of logical operations (such as "filter equal"), both of which may contain multiple words/word-pieces. Therefore, we take the graph-enhanced contextual representations from § 3.3 to initialize the representations of entity nodes. Specifically, we initialize the representation h_e of each entity node e by averaging the projected hidden vectors of the words contained in e as follows:

h_e = \frac{1}{n} \sum_{i=0}^{n} \mathrm{relu}\big(W_e \, h(x)_{p_e^i}\big)    (2)

where n denotes the total number of tokens in the span of entity e, p_e^i denotes the position of the i-th token, W_e ∈ R^{F×D} is a weight matrix, F is the dimension of the feature vectors of arguments, D is the dimension of the hidden vectors of BERT, and relu is the activation function.

3.4.2 Modules

In this part, we present function-specific modules, which are used as the basic computational units for composing all the required configurations of module network structures. Inspired by the neural module network (Andreas et al., 2015) and the recursive neural network (Socher et al., 2013), we implement each module with the same neural architecture but with different function-specific parameters. All the modules are trained jointly.

Each module corresponds to a specific function, where the function comes from the fixed set of over 50 functions described before. In a program, each logical operation has the format FUNCTION(ARG0, ARG1, ...), where each function may have a variable number of arguments. For example, the function hop has 2 arguments while the function count has 1 argument. To handle variable-length arguments, we develop each module as follows. We first calculate the composition for each function-argument pair and then produce the overall representation by combining the representations of these items. The calculation for each function-argument pair is implemented as matrix-vector multiplication, where each function is represented as a matrix and each argument is represented as a vector. This is inspired by vector-based semantic composition (Mitchell and Lapata, 2010), which states that matrix-vector multiplication can be viewed as the matrix modifying the meaning of the vector. Specifically, the output y_m of module m is computed with the following formula:

y_m = \frac{1}{N_m} \sum_{i=0}^{N_m} \sigma(W_m v_i + b_m)    (3)

where W_m ∈ R^{F×F} is a weight matrix and b_m is a bias vector for a specific module m, N_m denotes the number of arguments of module m, each v_i ∈ R^F is the feature vector representing the i-th input, and σ is the activation function. Under the aforementioned settings, modules can compose into a hierarchical network determined by the semantic structure of the parsed program.

3.4.3 Program-Guided Semantic Compositionality

In this part, we introduce how to compose a program-guided neural module network based on the structure of programs and the predefined modules. Taking the structure of the program and the representations of all the entity nodes as input, the composed neural module network learns the compositionality of the program for the final prediction. Figure 5 shows an example of a composed network based on the structure of the program.

Figure 5: An example of neural module network.

Along the structure of the program, each step of compositionality learning selects a module from the fixed set of parameterized modules defined in § 3.4.2 and operates on it with Equation 3 to dynamically generate a higher-level representation. This process is applied recursively until the output of the top module is generated, which is denoted as y_m^{top}. After that, we make the final prediction by feeding the combination of y_m^{top} and the final hidden vector h(x)_T from § 3.3 through an MLP (multilayer perceptron) layer. The motivation for this operation is to retain the complete semantic meaning of the whole context, because some linguistic cues are discarded during the synthesis of the program.

3.5 Program Generation

In this part, we describe our semantic parser for synthesizing a program for a textual statement. We tackle the semantic parsing problem in a weakly-supervised setting (Berant et al., 2013; Liang et al., 2017; Misra et al., 2018), since the ground-truth program is not provided.
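As a side note to § 3.4, a minimal sketch of how Equations 2 and 3 and the recursive composition of § 3.4.3 fit together is given below. It substitutes random NumPy matrices for the trained BERT projections and module parameters, uses tanh as a stand-in for the unspecified activation σ, and hand-builds the program tree of Figure 1; all of these are simplifying assumptions for illustration, not the paper's implementation.

```python
import numpy as np

F, D = 8, 16          # small feature/hidden dimensions, for illustration only
rng = np.random.default_rng(0)

# One (W_m, b_m) pair per function in the grammar (Eq. 3); random stand-ins here.
FUNCTIONS = ["filter_eq", "hop", "less", "count", "eq"]
MODULES = {m: (rng.normal(size=(F, F)), np.zeros(F)) for m in FUNCTIONS}
W_e = rng.normal(size=(F, D))   # stand-in for the entity projection in Eq. 2

def relu(x):
    return np.maximum(x, 0.0)

def entity_vector(token_vectors):
    """Eq. 2: average of relu-projected token vectors for one entity span."""
    return np.mean([relu(W_e @ h) for h in token_vectors], axis=0)

def apply_module(name, arg_vectors):
    """Eq. 3: average of sigma(W_m v_i + b_m) over the module's arguments."""
    W_m, b_m = MODULES[name]
    return np.mean([np.tanh(W_m @ v + b_m) for v in arg_vectors], axis=0)

def compose(node):
    """Recursively evaluate a program tree of ("func", [children]) / leaf vectors."""
    if isinstance(node, np.ndarray):        # leaf: an entity-node representation
        return node
    name, children = node
    return apply_module(name, [compose(c) for c in children])

def leaf(n_tokens):
    # Random vectors stand in for BERT hidden states of the entity's tokens.
    return entity_vector(rng.normal(size=(n_tokens, D)))

# less(hop(filter_eq(Year; 2004); Score); 270) from Figure 1.
year, y2004, score, v270 = leaf(1), leaf(1), leaf(1), leaf(1)
program = ("less", [("hop", [("filter_eq", [year, y2004]), score]), v270])

y_top = compose(program)        # representation of the top module
print(y_top.shape)              # (8,) -- combined with h(x)_T and fed to an MLP
```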
6058 Model Val Test Test (simple) Test (complex) Small Test Human Performance 92.1 Majority Guess 50.7 50.4 50.8 50.0 50.3 BERT classifier w/o Table 50.9 50.5 51.0 50.1 50.4 Table-BERT (Horizontal-S+T-Concatenate) 50.7 50.4 50.8 50.0 50.3 Table-BERT (Vertical-S+T-Template) 56.7 56.2 59.8 55.0 56.2 Table-BERT (Vertical-T+S-Template) 56.7 57.0 60.6 54.3 55.5 Table-BERT (Horizontal-S+T-Template) 66.0 65.1 79.0 58.1 67.9 Table-BERT (Horizontal-T+S-Template) 66.1 65.1 79.1 58.2 68.1 LPA-Voting w/o Discriminator 57.7 58.2 68.5 53.2 61.5 LPA-Weighted-Voting w/ Discriminator 62.5 63.1 74.6 57.3 66.8 LPA-Ranking w/ Discriminator 65.2 65.0 78.4 58.5 68.6 LogicalFactChecker (program from LPA) 71.7 71.6 85.5 64.8 74.2 LogicalFactChecker (program from Seq2Action) 71.8 71.7 85.4 65.1 74.3 Table 1: Performance on TABFACT in terms of label accuracy (%). The performances of Table-BERT and LPA are reported by Chen et al. (2019). Our system is abbreviated as LogicalFactChecker, with program generated via our Sequence-to-Action model and baseline (i.e. LPA), respectively. T, S indicate the table, the statement and + means the order of concatenation. In the linearization of tables, Horizontal (Vertical) refers to the horizontal (vertical) order for concatenating the cells. Concatenate (Template) means concatenating the cells directly (filling the cells into a template). In LPA settings, (Weighted) Voting means assigning each program with (score-weighted) equal weight to vote for the final result. Ranking means using the result generated by the top program ranked by the discriminator. As shown in Figure 3, a program in TABFACT is structural and follows a grammar with over 50 functions. To effectively capture the structure of the program and also generate legitimate programs following a grammar in the generation process, we develop a sequence-to-action approach, which is proven to be effective in solving many semantic parsing problems (Chen et al., 2018; Iyer et al., 2018; Guo et al., 2018). The basic idea is that the generation of a program tree is equivalent to the generation of a sequence of action, which is a traversal of the program tree following a particular order, like depth-first, left-to-right order. Specifically, our semantic parser works in a top-down manner in a sequence-to-sequence paradigm. The generation of a program follows an ASDL grammar (Yin and Neubig, 2018), which is given in Appendix C. At each step in the generation phase, candidate tokens to be generated are only those legitimate according to the grammar. Parent feeding (Yin and Neubig, 2017) is used for directly passing information from parent actions. We further regard column names of the table as a part of the input (Zhong et al., 2017) to generate column names as program arguments. We implement the approach with the LSTMbased recurrent network and Glove word vectors (Pennington et al., 2014) in this work, and the framework could be easily implemented with Transformer-based framework. Following Chen et al. (2019), we employ the label of veracity to guide the learning process of the semantic parser. We also employ programs produced by LPA (Latent Program Algorithm) for comparison, which is provided by Chen et al. (2019). In the training process, we train the semantic parser and the claim verification model separately. The training of semantic parser includes two steps: candidate search and sequence-to-action learning. 
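Returning briefly to the sequence-to-action view of § 3.5, the sketch below illustrates the equivalence it relies on: a depth-first, left-to-right traversal of a program tree can be serialized into a sequence of actions and deterministically rebuilt from it. The APPLY/GEN action names and the tuple-based tree encoding are illustrative assumptions, not the ASDL grammar or the parser actions used in the paper.

```python
def tree_to_actions(node):
    """Depth-first, left-to-right serialization of a program tree."""
    if isinstance(node, tuple):                     # ("func", [children])
        name, children = node
        actions = [("APPLY", name, len(children))]  # expand a function node
        for child in children:
            actions.extend(tree_to_actions(child))
        return actions
    return [("GEN", node)]                          # emit a terminal argument

def actions_to_tree(actions):
    """Rebuild the program tree; inverse of tree_to_actions."""
    def build(i):
        action = actions[i]
        if action[0] == "GEN":
            return action[1], i + 1
        _, name, arity = action
        children, i = [], i + 1
        for _ in range(arity):
            child, i = build(i)
            children.append(child)
        return (name, children), i
    tree, _ = build(0)
    return tree

# less(hop(filter_eq(Year; 2004); Score); 270) from Figures 1 and 3.
program = ("less", [("hop", [("filter_eq", ["Year", "2004"]), "Score"]), "270"])
actions = tree_to_actions(program)
assert actions_to_tree(actions) == program
print(actions)
```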
For candidate search, we closely follow LPA by first collecting a set of programs which could derive the correct label and then using trigger words to reduce the number of spurious programs. For learning the semantic parser, we use standard back-propagation, treating each (claim, table, positive program) triple as a training instance.

4 Experiments

We evaluate our system on TABFACT (Chen et al., 2019), a benchmark dataset for table-based fact checking. Each instance in TABFACT consists of a statement, a semi-structured Wikipedia table, and a label ("ENTAILED" or "REFUTED") indicating whether the statement is supported by the table or not. The primary evaluation metric of TABFACT is label accuracy. The statistics of TABFACT are given in Appendix A. Detailed hyper-parameters for model training are given in Appendix B for better reproducibility of the experiments. We compare our system with the following baselines, including the textual-matching-based baseline Table-BERT and the semantic-parsing-based baseline LPA, both of which are developed by Chen et al. (2019).

• Table-BERT tackles the problem as a matching problem. It takes the linearized table and the statement as input and employs BERT to predict a binary class.

• Latent Program Algorithm (LPA) formulates the verification problem as a weakly supervised semantic parsing problem. Given a statement, it operates in two steps: (1) latent program search for finding executable program candidates and (2) transformer-based discriminator selection for choosing the most consistent program. The final prediction is made by executing the selected program.

4.1 Model Comparison

In Table 1, we compare our model (LogicalFactChecker) with the baselines on the development set and the test set. It is worth noting that the complex test set and the simple test set are partitioned based on their collection channels, where the former involves higher-order logic and more complex semantic understanding. As shown in Table 1, our model with programs generated by the Sequence-to-Action model significantly outperforms previous systems, with 71.8% label accuracy on the development set and 71.7% on the test set, and achieves the state-of-the-art performance on the TABFACT dataset.

4.2 Ablation Study

We conduct ablation studies to evaluate the effectiveness of different components in our model.

Model | Val | Test
LogicalFactChecker | 71.83 | 71.69
- w/o Graph Mask | 70.06 | 70.13
- w/o Compositionality | 69.62 | 69.61

Table 2: Ablation studies (label accuracy, %) on the development set and the test set.

As shown in Table 2, we evaluate LogicalFactChecker under the following settings: (1) removing the graph-based mask described in § 3.3 (the first ablation row); (2) removing the program-guided compositionality learning mechanism described in § 3.4 (the second ablation row). Table 2 shows that eliminating the graph-based mask drops the accuracy by 1.56% on the test set. Removing the program-guided compositionality learning mechanism drops the accuracy by 2.08% on the test set, which reflects that the neural module network plays a more important role in our approach. This observation verifies that both mechanisms are beneficial for our task.

4.3 Case Study

We conduct a case study on the example shown in Figure 6. From the example, we can see that our system synthesizes a semantically consistent program for the given statement and makes the correct prediction using the synthesized program.
This observation reflects that our system has the ability to (1) find a mapping from textual cues to a complex function (such as the mapping from "most points" to the function "argmax") and (2) derive the structure of logical operations to represent the semantic meaning of the whole statement.

Figure 6: A case study of our approach. For the statement "The country with the most points is Poland", the predicted program is eq(Poland; hop(argmax(Points); Country)), which is executed over the table and yields ENTAILED.

4.4 Error Analysis

We randomly select 400 instances and summarize the major types of errors, which can be considered as future directions for further study. The dominant type of errors is caused by misleading programs generated by the semantic parser. As shown in the example in Figure 7 (a), the semantic parser fails to generate a semantically correct program because it lacks external knowledge about the date in the table and the "new year eve" in the statement.

Figure 7: Examples of error types, including (a) predicting a wrong program because of the lack of background knowledge, (b) predicting a correct program but a wrong label, and (c) cases where the logical operations required to understand the statement are not covered by the grammar.

The second type of errors is caused by semantic compositionality, even though the programs are correctly predicted. As shown in Figure 7 (b), the program involves operations requiring complex reasoning, like counting the exact number of rows. A potential way to alleviate this problem is to design more function-specific modules, as in Andreas et al. (2015). The third type of errors is caused by the coverage of the logical operations we use. In this work, we follow Chen et al. (2019) and use exactly the same functions. However, as shown in Figure 7 (c), understanding the statement requires a function that computes the difference between two times, which is not covered by the current set.
Previous studies in the field of fact checking differ in the genres of supporting evidence used for verification, including natural language (Thorne et al., 2018), semi-structured tables (Chen et al., 2019), and images (Zlatkova et al., 2019; Nakamura et al., 2019). The majority of previous works deal with textual evidence. FEVER (Thorne et al., 2018) is one of the most influential datasets in this direction, where evidence sentences come from 5.4 million Wikipedia documents. Systems developed on FEVER are dominated by pipelined approaches with three separately trained models, i.e. document retrieval, evidence sentence selection, and claim verification. There also exist approaches (Yin and Roth, 2018) that attempt to jointly learn evidence selection and claim verification. More recently, the second FEVER challenge (Thorne et al., 2019) is built for studying adversarial attacks in fact checking4. Our work also relates to fake news detection. For example, Rashkin et al. (2017) study fact checking by considering stylistic lexicons, and Wang (2017) builds LIAR dataset with six finegrained labels and further uses meta-data features. There is a fake news detection challenge5 hosted in WSDM 2019, with the goal of the measuring the truthfulness of a new article against a collection of existing fake news articles before being published. There are very recent works on assessing the factual accuracy of the generated summary in neural abstractive summarization systems (Goodrich et al., 2019; Kry´sci´nski et al., 2019), as well as the use of this factual accuracy as a reward to improve abstractive summarization (Zhang et al., 2019). Chen et al. (2019) recently release TABFACT, a large dataset for table-based fact checking. Along with releasing the great dataset, they provide two baselines: Table-BERT and LPA. Table-BERT is a textual matching based approach, which takes the linearized table and statement as inputs and states the veracity. However, Table-BERT fails to utilize logical operations. LPA is a semantic parsing based approach, which first synthesizes programs by latent program search and then ranks candidate programs with a neural-based discriminator. However, the ranking step in LPA does not consider the table information. Our approach simultaneously utilizes the logical operations for semantic compositionality and the connections among tables, programs, and statements. Results show that our approach achieves the state-of-the-art performance on TABFACT. 4http://fever.ai/ 5https://www.kaggle.com/c/ fake-news-pair-classification-challenge/ 6061 6 Conclusion In this paper, we present LogicalFactChecker, a neural network based approach that considers logical operations for fact checking. We evaluate our system on TABFACT, a large-scale benchmark dataset for verifying textual statements over semi-structured tables, and demonstrate that our approach achieves the state-of-the-art performance. LogicalFactChecker has a sequence-to-action semantic parser for generating programs, and builds a heterogeneous graph to capture the connections among statements, tables, and programs. We utilize the graph information with two mechanisms, including a mechanism to learn graph-enhanced contextual representations of tokens with graphbased attention mask matrix, and a neural module network which learns semantic compositionality in a bottom-up manner with a fixed set of modules. 
We find that both graph-based mechanisms are beneficial to the performance, and our sequenceto-action semantic parser is capable of generating semantic-consistent programs. Acknowledgement Wanjun Zhong, Jiahai Wang and Jian Yin are supported by the National Natural Science Foundation of China (U1711262, U1611264,U1711261,U1811261,U1811264, U1911203), National Key R&D Program of China (2018YFB1004404), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key R&D Program of Guangdong Province (2018B010107005). The corresponding author is Jian Yin. References Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. Deep compositional question answering with neural module networks. ArXiv, abs/1511.02799. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544. Bo Chen, Le Sun, and Xianpei Han. 2018. Sequenceto-action: End-to-end semantic graph generation for semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 766– 777, Melbourne, Australia. Association for Computational Linguistics. Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2019. Tabfact: A largescale dataset for table-based fact verification. arXiv preprint arXiv:1909.02164. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Robert Faris, Hal Roberts, Bruce Etling, Nikki Bourassa, Ethan Zuckerman, and Yochai Benkler. 2017. Partisanship, propaganda, and disinformation: Online media and the 2016 us presidential election. Ben Goodrich, Vinay Rao, Peter J Liu, and Mohammad Saleh. 2019. Assessing the factual accuracy of generated text. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 166–175. ACM. Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, and Jian Yin. 2018. Dialog-to-action: conversational question answering over a large-scale knowledge base. In Advances in Neural Information Processing Systems, pages 2942–2951. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2018. Mapping language to code in programmatic context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1643–1652, Brussels, Belgium. Association for Computational Linguistics. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858. Wojciech Kry´sci´nski, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Evaluating the factual consistency of abstractive text summarization. arXiv preprint arXiv:1910.12840. Chen Liang, Jonathan Berant, Quoc Le, Kenneth D Forbus, and Ni Lao. 2017. Neural symbolic machines: Learning semantic parsers on freebase with weak supervision. ACL. Percy Liang. 2016. Learning executable semantic parsers for natural language understanding. arXiv preprint arXiv:1603.06677. Dipendra Misra, Ming-Wei Chang ad Xiaodong He, and Wen tau Yih. 2018. Policy shaping and generalized update equations for semantic parsing from denotations. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brusses, Belgium. Association for Computational Linguistics. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive science, 34(8):1388–1429. 6062 Thomas M¨uller, Francesco Piccinno, Massimo Nicosia, Peter Shaw, and Yasemin Altun. 2019. Answering conversational questions on structured data without logical forms. arXiv preprint arXiv:1908.11787. Kai Nakamura, Sharon Levy, and William Yang Wang. 2019. r/fakeddit: A new multimodal benchmark dataset for fine-grained fake news detection. arXiv preprint arXiv:1911.03854. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2931–2937. Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 455–465. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. Fever: a large-scale dataset for fact extraction and verification. arXiv preprint arXiv:1803.05355. James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2019. The FEVER2.0 shared task. In Proceedings of the Second Workshop on Fact Extraction and VERification (FEVER), pages 1–6, Hong Kong, China. Association for Computational Linguistics. Vaibhav Vaibhav, Raghuram Mandyam Annasamy, and Eduard Hovy. 2019. Do sentence interactions matter? leveraging sentence level representations for fake news classification. arXiv preprint arXiv:1910.12203. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. William Yang Wang. 2017. ” liar, liar pants on fire”: A new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. arXiv preprint arXiv:1704.01696. Pengcheng Yin and Graham Neubig. 2018. Tranx: A transition-based neural abstract syntax parser for semantic parsing and code generation. arXiv preprint arXiv:1810.02720. Wenpeng Yin and Dan Roth. 2018. Twowingos: A twowing optimization strategy for evidential claim verification. arXiv preprint arXiv:1808.03465. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. arXiv preprint arXiv:1905.12616. Yuhao Zhang, Derek Merck, Emily Bao Tsai, Christopher D Manning, and Curtis P Langlotz. 2019. Optimizing the factual correctness of a summary: A study of summarizing radiology reports. arXiv preprint arXiv:1911.02541. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. 
Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103. Dimitrina Zlatkova, Preslav Nakov, and Ivan Koychev. 2019. Fact-checking meets fauxtography: Verifying claims about images. arXiv preprint arXiv:1908.11722. A Statistic of TABFACT Split #Sentence Table Avg. Row Avg. Col Train 92,283 13,182 14.1 5.5 Val 12,792 1,696 14.0 5.4 Test 12,779 1,695 14.2 5.4 Table 3: Basic statistics of Train/Val/Test split in the dataset B Training Details In this part, we describe the training details of our experiments. As described before, the semantic parser and statement verification model are trained separately. We first introduce the training process of the semantic parser. Both training and validation datasets are created in a same way as described in § 3.5. Specifically, each pair of data is labeled as true or false. Finally, the training dataset contains 495,131 data pairs, and the validation dataset contains 73,792 data pairs. We implement the approach with the LSTM-based recurrent network and use the following set of hyper parameters to train models: hidden size is 256, learning rate is 0.001, learning rate decay is 0.5, dropout is 0.3, batch size is 150. We use glove embedding to 6063 initialize embedding and use Adam to update the parameters. We use beam search during inference and set beam size as 15. We use BLEU to select the best checkpoint by validation scores. Then we introduce the training details of statement verification model. We employ cross-entropy loss as the loss function. We apply AdamW as the optimizer for model training. In order to directly compare with Table-BERT, we also employ BERTBase as the backbone of our approach. The BERT network and neural module network are trained jointly. We set learning rate as 1e-5, batch size as 8 and set max sequence length as 512. The training time for one epoch is 1.2 hours by 4 P40 GPUs. We set the dimension of entity node representation as 200. C ASDL-Grammar In this part, we introduce the ASDL grammar (Yin and Neubig, 2018) we apply for synthesizing the programs in Seq2Action model. The definition of functions mainly follows Chen et al. (2019). Details can be found in following two pages.6 6The function “filter eq” contains three arguments (subtable, column name, value), but we ignore the first argument in the running example for a clearer illustration. 
6064 Composite Type  Constructor  Fields  OutBool  Bool  pr_bool bool  none  OutStr str  only  OutRow row  zero  OutNum num  after  OutRow row1, OutRow row2  before  OutRow row1, OutRow row2  first  OutRow row1, OutRow row2  second  OutRow row1, OutRow row2  third  OutRow row1, OutRow row2  fourth  OutRow row1, OutRow row2  fifth  OutRow row1, OutRow row2  last  OutRow row1, OutRow row2  greater  OutNum num1, OutNum num2  less  OutNum num1, OutNum num2  eq  OutStr str1, OutStr str2  not_eq  OutStr str1, OutStr str2  and  OutBool bool1, OutBool bool2  within  OutRow row, pr_header header, OutStr str  not_within  OutRow row, pr_header header, OutStr str  all_eq  OutRow row, pr_header header, OutStr str  all_not_eq  OutRow row, pr_header header, OutStr str  all_less  OutRow row, pr_header header, OutNum num  all_less_eq  OutRow row, pr_header header, OutNum num  all_greater  OutRow row, pr_header header, OutNum num  all_greater_eq  OutRow row, pr_header header, OutNum num  OutRow  Row  pr_row row  top  OutRow row  bottom  OutRow row  argmax  OutRow row, pr_header header  argmin  OutRow row, pr_header header  filter_eq  OutRow row, pr_header header, OutStr str  filter_not_eq  OutRow row, pr_header header, OutStr str  filter_less  OutRow row, pr_header header, OutNum num  filter_greater  OutRow row, pr_header header, OutNum num  filter_greater_eq  OutRow row, pr_header header, OutNum num  filter_less_eq  OutRow row, pr_header header, OutNum num                      6065 OutNum  Num  pr_number num  count  OutRow row  half  OutRow row  one_third  OutRow row  inc_num  OutNum num  uniq  OutRow row, pr_header header  avg  OutRow row, pr_header header  sum  OutRow row, pr_header header  max  OutRow row, pr_header header  min  OutRow row, pr_header header  diff  OutNum num1, OutNum num2  add  OutNum num1, OutNum num2  OutStr  Str  pr_str str  hop  OutRow row, pr_header header  most_freq  OutRow row, pr_header header  OutNone  dec_num  OutNum num 
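To make the grammar above more tangible, the following is a minimal, self-contained sketch of how a few of these functions (filter_eq, hop, count, eq, less) could be executed over a table stored as a list of row dictionaries. The exact execution semantics used by TABFACT may differ; this is only an illustration.

```python
# Illustrative executable versions of a few grammar functions over a toy table.
def filter_eq(rows, header, value):
    return [r for r in rows if str(r[header]) == str(value)]

def hop(rows, header):
    # Value of the given column in the (assumed non-empty) selected sub-table.
    return rows[0][header]

def count(rows):
    return len(rows)

def eq(a, b):
    return str(a) == str(b)

def less(a, b):
    return float(a) < float(b)

table = [
    {"Year": 2004, "Score": 250},
    {"Year": 2005, "Score": 300},
]
# less(hop(filter_eq(Year; 2004); Score); 270)  ->  True ("ENTAILED")
print(less(hop(filter_eq(table, "Year", 2004), "Score"), 270))
```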
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 583–592 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 583 End-to-End Neural Pipeline for Goal-Oriented Dialogue Systems using GPT-2 Donghoon Ham1∗, Jeong-Gwan Lee1∗, Youngsoo Jang1, Kee-Eung Kim1,2 1School of Computing, KAIST, Daejeon, Republic of Korea 2Graduate School of AI, KAIST, Daejeon, Republic of Korea {dhham, jglee, ysjang}@ai.kaist.ac.kr, [email protected] Abstract The goal-oriented dialogue system needs to be optimized for tracking the dialogue flow and carrying out an effective conversation under various situations to meet the user goal. The traditional approach to building such a dialogue system is to take a pipelined modular architecture, where its modules are optimized individually. However, such an optimization scheme does not necessarily yield an overall performance improvement of the whole system. On the other hand, end-to-end dialogue systems with monolithic neural architecture are often trained only with input-output utterances, without taking into account the entire annotations available in the corpus. This scheme makes it difficult for goal-oriented dialogues where the system needs to be integrated with external systems or to provide interpretable information about why the system generated a particular response. In this paper, we present an end-to-end neural architecture for dialogue systems that addresses both challenges above. Our dialogue system achieved the success rate of 68.32%, the language understanding score of 4.149, and the response appropriateness score of 4.287 in human evaluations, which ranked the system at the top position in the end-to-end multi-domain dialogue system task in the 8th dialogue systems technology challenge (DSTC8). 1 Introduction The goal-oriented dialogue system helps users achieve their goals such as requesting information or executing commands via natural language conversations. It is thus crucial for the dialogue system to keep track of the dialogue flow and carry out an effective conversation, even when the user goal is complicated or the dialogue flow is suddenly changed. ∗: Equal contribution The traditional approach to building a goaloriented dialogue system mostly adopts a pipelined modular architecture, with the natural language understanding (NLU) module (Kim et al., 2017; Lee et al., 2019b) that first recognizes and comprehends user’s intent and extracts values for slots, then the dialogue state tracking (DST) module (Williams et al., 2013) that tracks the values of slots, then the dialogue policy (POL) module that decides the system action, and then finally the natural language generation (NLG) module (Wen et al., 2015) that generates the utterance that corresponds to the system action. In some cases, multiple modules are combined together, e.g. the Word-level DST (Ramadan et al., 2018; Wu et al., 2019; Lee et al., 2019a) which maps the dialogue history to the dialogue state (the composite function of NLU and DST), and the Word-level POL (Budzianowski et al., 2018; Pei et al., 2019; Chen et al., 2019; Mehri et al., 2019; Zhao et al., 2019) which maps the previous utterance and dialogue state to the system response (the composite function of POL and NLG). These modules are usually optimized separately, which does not necessarily lead to an overall optimized performance for successful task completion. 
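To summarize the pipelined architecture described above in code form, here is a minimal sketch of a single dialogue turn flowing through separately trained NLU, DST, POL, and NLG modules. The function names are placeholders for illustration, not a real implementation.

```python
# A minimal sketch of the traditional pipelined dialogue architecture
# (NLU -> DST -> POL -> NLG); each module is assumed to be trained separately.
def dialogue_turn(user_utterance, dialogue_state, nlu, dst, pol, nlg):
    user_act = nlu(user_utterance)                   # intent + slot values
    dialogue_state = dst(dialogue_state, user_act)   # updated belief over slots
    system_act = pol(dialogue_state)                 # e.g. request(area), inform(name=...)
    system_response = nlg(system_act)                # natural-language surface form
    return system_response, dialogue_state
```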
On the other hand, end-to-end neural models for dialogue systems (Madotto et al., 2018; Lei et al., 2018) enjoy a straightforward training approach to generating system responses, but it is difficult for goal-oriented dialogues where the system needs to interact with external systems or to generate an explanation that supports why the system generated a particular response. In this paper, we present an end-to-end neural architecture for dialogue systems that addresses both challenges above. Our work is based on fine-tuning GPT-2 (Radford et al., 2019) to faithfully perform the following essential dialogue management steps in a sequential manner using a single model: (1) 584 Domain : “restaurant” [usr] Are there any restaurants that serve proper British food in town? [sys] Oh yes quite a few. Which part of town will you be dining in? [usr] West, if possible. [sys] We have three: Graffiti, Saint John‘s Chop House, and Traveller. [usr] Will you give me the phone number, address, and postcode for Graffiti? [sys] The phone number is 01223277977. The address is Hotel Felix Whitehouse Lane Hunthindon Road and post code is cb30lx. Want to book? [usr] 6 people, please. [sys] Booking is successful. Reference number is 00000081. “ref” : “00000081” “name” : “graffiti” “phone” : “01223277977” ”postcode” : “cb30lx” “address” : “Hotel Felix Whitehouse Lane Hunthindon Road” “food” : “british” “area” : “west” “ref” : “00000084” “name” : grafton hotel restaurant “phone” : “01223241387” “postcode” : “cb580a” “address” : “Grafton Hotel 619 Newmarket Road Fen Ditton” “food” : “british” “area” : “east” Dialogue id : “SNG0689” Goal Database (restaurant) Dialogue turns Blue : Informable slot Yello-Green : Requestable slot name Orange : Requestable slot value Informable “food” : “british” “area” : “west” Requestable “phone” “address” “postcode” Book “people” : 6 … … Figure 1: A single-domain example in MultiWOZ dataset. DST via predicting the dialogue state, (2) POL via predicting the system action, (3) retrieving appropriate records from the external database for the dialogue state and the system action, and (4) NLG via predicting the system response. As a result, our neural model not only generates the system response just like end-to-end neural dialogue systems, but also generates dialogue states and system actions as intermediate outputs, improving the interpretability of the behavior of the dialogue system. In order to achieve this, we leverage the annotations of dialogue states and system actions provided in the corpus (e.g. MultiWOZ dataset (Budzianowski et al., 2018)) for training our system in a very natural way. Our model is evaluated using ConvLab (Lee et al., 2019b), a multi-domain end-to-end dialog system platform to support various aspects in the development and evaluation of dialogue systems, in terms of the automatic evaluation using the user simulator and the human evaluation using crowd workers. Particularly, in the human evaluation carried out as a part of the 8th dialogue systems technology challenge (DSTC8) (Kim et al., 2019), our system attained the success rate of 68.32%, the language understanding score of 4.149, and the response appropriateness score of 4.287, ranking at the 1st place in DSTC8. We also show that our model is competitive to other state-of-the-art models specialized for two sub-tasks in the dialogue management, i.e. Dialogue State Tracking and Dialogue-Context-to-Text Generation tasks, although our model was not particularly tuned for those sub-tasks. 
The main characteristics of our model can be summarized as follows: (1) it is trained to follow the traditional dialogue management pipeline, making the monolithic neural model more interpretable and easily integratable with external systems, while (2) it is trained in an end-to-end fashion with simple gradient descent, and (3) leverages GPT-2, a powerful pre-trained language model. The code is available through the GitHub code repository.1 2 End-to-end Multi-Domain Task-Completion Task Before we describe our approach, we briefly overview the end-to-end multi-domain taskcompletion task used in DSTC8, for which we developed our dialogue system. 2.1 The MultiWOZ Dataset The MultiWOZ dataset is a large-scale fully annotated corpus of natural human-human conversa1https://github.com/KAIST-AILab/ NeuralPipeline_DSTC8 585 Restaurantinform name : [restaurant-name] System Action DB Query Candidates after Query “name” : “frankie and bennys” “pricerange” : “expensive” “area” : “south” “food” : “Italian” … Database <usr> I ’d like to find an expensive place to dine that specifically serves Italian food . <sys> Okay . Would you like to go to the centre or south part of town ? <usr> I would like the south part of town please . Dialogue history : restaurant pricerange : expensive food : italian area : south Dialogue state System action Response Dialogue state Word decoder layer Transformer decoder blocks Dialogue history Dialogue state System action Response GPT-2 If Empty Query Results, ① ② ③ ④ Response : frankie and bennys meets your criteria. Would you like to book it ? Query results Replacement Response : [restaurant_name] meets your criteria. Would you like to book it ? ⑤ ⑥ Response : There’s no restaurant meets your criteria Candidates after Query No Results Restaurant -nooffer None-None System Action Empty Query Results case Normal case Figure 2: The overview of our end-to-end neural dialogue model. For the transformer, we use fine-tuned GPT-2. The dashed line represents the information to and from the DB query, which is invoked when the system action needs to fetch an actual value from the database. tions, where the user as a tourist converses with the system as a clerk across multiple domains. Each dialogue is rich in annotations such as ‘goal’, ‘metadata’, and ‘dialog act’ as well as user and system utterances. These annotations facilitate using machine learning to develop individual modules of a dialogue system (NLU, DST, POL, NLG, Wordlevel DST, Word-level POL), as well as an end-toend dialogue system. Figure 1 shows an example of a single-domain dialogue in the MultiWOZ dataset. Each dialogue consists of ‘Goal’, ‘Database’ and ‘Dialogue turns’. The goal is defined by the domain and the slots. The slots are divided into informable, requestable and book slots. Informable slots represent user constraints and Requestable slots hold additional information that the user wants to obtain. Book slots are used to reserve a place recommended by the system. 2.2 ConvLab For evaluating dialogue systems, DSTC8 used ConvLab (Lee et al., 2019b), an open-source platform that supports researchers to train and evaluate their own dialogue systems. 
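Before turning to the details of ConvLab, here is an illustrative rendering (not the dataset's literal JSON schema) of the goal in Figure 1: informable slots constrain the search, requestable slots are the information the user wants back, and book slots hold reservation details.

```python
# Illustrative goal structure for the single-domain example of Figure 1.
goal = {
    "restaurant": {
        "informable": {"food": "british", "area": "west"},
        "requestable": ["phone", "address", "postcode"],
        "book": {"people": 6},
    }
}
```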
ConvLab contains implementations of the state-of-the-art models of NLU, DST, POL, NLG (Kim et al., 2017; Lee et al., 2019b; Ramadan et al., 2018; Wu et al., 2019; Wen et al., 2015, 2017; Budzianowski et al., 2018) and an end-to-end neural model for dialogue systems (Lei et al., 2018; Madotto et al., 2018), which are readily reusable for building dialogue systems using various approaches. ConvLab also provides an agenda-based user simulator to easily interact with the target dialogue system, consisting of a multi-intent language understanding(MILU) (Lee et al., 2019b) for NLU, a rule-based policy, and a template-based NLG. For each dialogue, a goal is randomly generated that conforms with the goal schema of the MultiWOZ dataset. The user simulator then generates an agenda based on the goal. While interacting with 586 ⇒Dialogue State ⇒System Action <usr> I am looking for a place to stay that has cheap price range it should be in a type of hotel <sys> Okay , do you have a specific area you want to stay in ? “metadata”: {“hotel”: { “semi”: {“name”: “not mentioned”, “area”: “not mentioned”, “parking”: “not mentioned”, “pricerange”: “cheap”, “stars”: “not mentioned”, “internet”: “not mentioned”, “type”: “hotel”}} “dialog_act”: {“Hotel-Request”: [[“Area”, “?”]]} <usr> no, I just need to make sure it ’s cheap, oh , and I need parking <ds> <hotel> <name> <nm> <area> <nm> <park ing> <nm> <price range> ⋯ Word-level Input Representation <sa> <hotelrequest> <area> ? Delimiter of dialogue state Domain Delimiter of system action : Slot name-value pairs System action intent Figure 3: In the MultiWOZ dataset, the ‘metadata’ is treated as the dialogue state and the ‘dialogue act’ is treated as the system action. the target dialogue system, it recognizes the system dialogue act, decides the user dialogue act from the agenda stack, and generates the user response at each turn. When the system offers to book and the user accepts it, the system should notify an 8-digit reference number. The reference number is used to verify whether the booked place is fit on what the user informs. ConvLab also provides an automatic evaluator which assesses whether the target dialogue system (1) traces what the user informs (2) informs what the user requests, and (3) makes an appropriate booking using an external database based on the traced information. Although the user simulator and evaluator are highly sophisticated, it is not as perfect as human. Hence, the dialogue systems submitted to the DSTC8 were evaluated not only with the user simulator but also with human crowd-workers. 3 End-to-End Neural Pipeline for Goal-Oriented Dialogue System We now describe our end-to-end neural pipeline for the goal-oriented dialogue system based on GPT-2. Our system consists of (1) the GPT-2 model finetuned on the delexicalized version of MultiWOZ dataset (Section 3.2) and (2) the database query module. We take the pre-trained GPT-2 model and fine-tune it to follow the steps of the dialogue management pipeline. Figure 2 illustrates an overall architecture with a concrete example. The overview of the process followed by our model is as follows: 1. Predict the recent domain and the corresponding dialogue state conditioned on the dialogue history. 2. Predict the system action with delexicalized tokens conditioned on the dialogue history and dialogue state. 3. If the system action (e.g. ‘inform’, ‘book’) needs external information from the database, the query module2 retrieves the candidates and returns one of them. 4. 
Update the current system action when detecting Empty Query Results (Section 3.5). 5. Generate the system response with delexicalized tokens conditioned on the dialogue history, dialogue state, and system action. 6. Update the delexicalized tokens in the system response with the query result. (Footnote 2: ConvLab provides a DB query module returning candidates given the domain and dialogue state.)

In Figure 2, the circled numbers indicate the order of the process. The red box shows how our system handles the case when the DB query does not return any record at all.

3.1 Input Representation

In the MultiWOZ dataset, 'metadata' and 'dialog act' correspond to the current dialogue state and the current system action, respectively (Figure 3). In order to use GPT-2, we need to convert the dialogue state and the system action into word tokens. Figure 3 shows an illustrative example of a single turn of a dialogue and its representation of the dialogue state and system action. We introduce delimiter tokens <usr>, <sys>, <ds> and <sa> to signal the beginning of the sequence representations of the user utterance, system response, dialogue state, and system action. The domain and the slot names are also represented by additional special tokens, and <nm> and <dc> are special tokens that indicate 'not mentioned' and 'don't care'. The complete input representation for our model is illustrated in Figure 4, similar to Radford et al. (2019) and Wolf et al. (2019). The input embedding comprises the token embedding, the speaker embedding, and the positional embedding.

Figure 4: Input representation for fine-tuning GPT-2. The input is the concatenation of the dialogue history, dialogue state, system action, and system response, and the input embedding is the sum of the token embedding, the speaker embedding, and the positional embedding.

3.2 Delexicalization

Each dialogue in the MultiWOZ dataset is generated based on the DB query results, and as such, the requestable slot values such as reference numbers and addresses (e.g. those colored in orange in Figure 1) are valid only for that particular dialogue instance. On the other hand, our model should be able to provide appropriate information depending on the dialogue context. To address this, we delexicalized all the values for requestable slots (reference number, name, postcode, phone number, address) that appear in the corpus as [DOMAIN SLOTNAME] (e.g. [hotel postcode] for the hotel's postcode). Thus, our model learns to generate delexicalized system responses, and the delexicalized tokens are later string-replaced with the real information from the DB query using a small piece of post-processing code.

3.3 Training Objective

In order to fine-tune GPT-2, we optimize the weighted sum of the objectives of language modeling (LM) and next-utterance classification (NC), following Radford et al. (2018). For LM, we use the standard left-to-right LM objective (Bengio et al., 2003):

$$\mathcal{L}_{LM}(w_1, \ldots, w_n) = \sum_i \log P(w_i \mid w_1, \ldots, w_{i-1})$$

The LM objective calculates the likelihood of the next word-token given the previous word-tokens.
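For concreteness, here is a minimal sketch of how the flattened input of Section 3.1, over which this LM objective is computed, could be assembled with the delimiter tokens <usr>, <sys>, <ds> and <sa>. The exact serialization order of slots and the spelling of the action tokens are assumptions made for illustration, not the system's exact tokenization.

```python
# A minimal sketch of serializing dialogue history, dialogue state, and system
# action into a single GPT-2 input string using the paper's delimiter tokens.
def build_input(history, dialogue_state, system_action):
    parts = []
    for speaker, utterance in history:                 # speaker is "usr" or "sys"
        parts.append(f"<{speaker}> {utterance}")
    parts.append("<ds> " + " ".join(
        f"<{domain}> " + " ".join(f"<{slot}> {value}" for slot, value in slots.items())
        for domain, slots in dialogue_state.items()))
    parts.append("<sa> " + " ".join(system_action))
    return " ".join(parts)

history = [("usr", "I am looking for a cheap hotel."),
           ("sys", "Okay, do you have a specific area you want to stay in?"),
           ("usr", "No, I just need to make sure it's cheap and has parking.")]
state = {"hotel": {"pricerange": "cheap", "parking": "<nm>", "area": "<nm>"}}
action = ["<hotel-request>", "<area>", "?"]
print(build_input(history, state, action))
```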
For NC, the model needs to distinguish the gold response (gold dialogue state+gold system action+gold system response) from a distractor (gold dialogue state+gold system action+fake system response), given the dialogue history. The distractor system responses were randomly sampled from the MultiWOZ dataset. The linear classifier takes the last hidden state of the GPT-2’s decoder block as input and computes the class probability by passing through the softmax layer. The cross-entropy loss between the class probability and the correct label was used for the NC objective, LNC. Thus, for the given word sequence W = (w1, . . . , wn), the total objective becomes a linear combination of LLM and LNC with hyper-parameters αLM and αNC: Ltotal(W) = αLMLLM(W) + αNCLNC(W) 588 Model Success Rate ↑ Return ↑ Turns ↓ Precision ↑ Recall ↑ F1 ↑ Book Rate ↑ Baseline 62.00% 28.22 8.18 0.70 0.83 0.74 84.38% Ours + greedy 78.60% 48.92 7.40 0.87 0.89 0.87 86.34% Ours + top-p (p=0.8) 75.40% 44.67 7.81 0.88 0.88 0.86 84.10% Ours + top-k (k=30) 74.80% 44.47 7.29 0.83 0.86 0.83 83.49% Table 1: Results of decoding strategies in the automatic evaluation, using the ConvLab evaluator. A baseline system provided by ConvLab consists of MILU (Lee et al., 2019b) as NLU module, rule-based DST and POL, and template-based NLG. Rank Team ID Success Rate ↑ Language Response Turns ↓ Understanding ↑ Appropriateness ↑ 1 OURS(504430) 68.32% 4.149 4.287 19.507 2 504429 65.81% 3.538 3.632 15.481 3 504563 65.09% 3.538 3.840 13.884 4 504651 64.10% 3.547 3.829 16.906 5 504641 62.91% 3.742 3.815 14.968 N/A Baseline 56.45% 3.097 3.556 17.543 Table 2: Overall results of the human evaluation carried out by DSTC8 organizers. Only the top five teams and the baseline results are compared. 3.4 Decoding Strategy When we generate the system response from the dialogue history, the final output is the probability distribution of word-tokens at each position. Using the distribution, there are many decoding methods for generating word-tokens, which have a significant impact on the quality of the output (Holtzman et al., 2020; Weston et al., 2018). The greedy decoding and the beam search are the most common approaches. However, since the greedy decoding only considers the token with the highest probability at each position, it does not necessary yield a system response with overall high probability. In addition, Holtzman et al. (2020) evidences that the beam search decoding is not appropriate for high-entropy natural language generation such as dialogues. Other sampling-based decoding methods, top-k sampling and top-p sampling have been shown to addressed the above problems quite effectively for dialogue tasks (Wolf et al., 2019; Budzianowski and Vuli´c, 2019). We evaluated the performance of our models with the decoding schemes mentioned above, and selected the best one via human evaluation. 3.5 Handling Empty Query Result As we mentioned before, GPT-2 invokes the query module to interact with the database. However, GPT-2 doesn’t know how many candidates satisfy the constraints a-priori. Therefore, there exist cases where no candidate happens to satisfy the constraints, which we refer to as Empty-Query-Result. In this case, the dialogue system should generate the system response corresponding to the intent Empty-Query-Result. Our system monitors the system action generated from GPT-2 and replace it by <EQR> if the database query returns an empty result, and feed this modified input to GPT-2 to generate the system response. 
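Returning briefly to the decoding strategies of Section 3.4, the following is a generic sketch of top-p (nucleus) sampling over a next-token distribution; it is not the exact decoding code of the submitted system, and p=0.8 matches the setting reported later.

```python
# A minimal sketch of top-p (nucleus) sampling for a 1-D logits vector.
import torch

def top_p_sample(logits: torch.Tensor, p: float = 0.8) -> int:
    """Sample a token id from the smallest set of tokens whose cumulative probability >= p."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    keep = (cumulative - sorted_probs) < p      # cumulative mass *before* each token
    kept_probs = sorted_probs[keep] / sorted_probs[keep].sum()   # renormalize the nucleus
    choice = torch.multinomial(kept_probs, num_samples=1)
    return int(sorted_idx[keep][choice].item())
```

Setting p close to 1 approaches pure sampling, while very small p approaches greedy decoding, which is why the choice of p trades off diversity against precision in the generated responses.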
This simple handling of empty query results worked quite well in practice.

4 Related Work

TransferTransfo (Wolf et al., 2018) was the first attempt to incorporate a large-scale pre-trained language model into a chit-chat dialogue system. Using GPT as a backbone, their fine-tuning approach ranked first in the automatic evaluation and second in the human evaluation of the ConvAI2 competition (Dinan et al., 2018). Our model is mainly inspired by this work, extending it to goal-oriented dialogues using GPT-2. In parallel with and independently of our work towards the DSTC8 submission, Budzianowski and Vulić (2019) also demonstrated a neural model for goal-oriented dialogue systems by fine-tuning GPT-2 on the MultiWOZ dataset. However, they only handle the dialogue-context-to-text task, which outputs the system response given the dialogue history, the ground-truth dialogue state, and the database. In our case, no oracle information about the database or the dialogue state is provided; only the dialogue history is given. Taking the dialogue history as input, our model operates as a complete dialogue system that generates system responses by sequentially following the core steps of the dialogue management pipeline.

Figure 5: Visualizing attention weights. (left) The model attends to the dialogue state <area> <nm> for generating the system action <restaurant-request> <area>. (right) The model attends to the system action <restaurant-nooffer> for generating the response 'I'm sorry. There are no modern European restaurants'.

5 Experimental Settings

5.1 Training Details

We developed our model using the open-source implementation of Wolf et al. (2018) (https://github.com/huggingface/transfer-learning-conv-ai) and GPT-2-small (124M parameters), which consists of 12 transformer decoder blocks, with its pre-trained weights (Wolf et al., 2019) (https://github.com/huggingface/transformers). We tokenized each sentence into sub-words using the GPT2Tokenizer (Sennrich et al., 2016). We fine-tuned GPT-2 with batch size 2 for 4 epochs over the MultiWOZ training dataset. The maximum history size of each dialogue was set to 15. We used the Adam optimizer (Kingma and Ba, 2015) with β1 = 0.9, β2 = 0.999 and a learning rate of 6.25e-5. The coefficients of the LM and NC losses were set to 2.0 and 1.0, respectively.

5.2 Evaluation Metrics

There were two evaluation criteria in the End-to-End Multi-Domain Dialog System Task of the Multi-Domain Task-Completion Track in DSTC8:

• Automatic evaluation with the user simulator: Success Rate, Book Rate, Return, Turns, Precision, Recall, F1

• Human evaluation with crowd-workers: Success Rate, Language Understanding Score, Response Appropriateness Score, Turns

In measuring the success rate, the dialogue is considered a success only if the requestable slots are correctly filled and, when needed, the booking succeeds. Book success is achieved only if the reserved information fits all informable slots, and is measured by the book rate as a sub-evaluation. Return is a reward signal obtained from the user simulator when the dialogue is complete. The return of each dialogue is computed as follows:

$$\text{Return} = \begin{cases} -\text{Turns} + 2 \cdot \text{max\_turn} & \text{if the task succeeds,} \\ -\text{Turns} - \text{max\_turn} & \text{otherwise.} \end{cases}$$
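A small worked example of this return signal is given below; it is only a sketch, assuming the turn limit max_turn is 40 (the example value used in the description that follows).

```python
# Worked example of the return signal: -Turns + 2*max_turn on success,
# -Turns - max_turn otherwise (max_turn = 40 assumed here).
def dialogue_return(turns: int, success: bool, max_turn: int = 40) -> int:
    return -turns + (2 * max_turn if success else -max_turn)

print(dialogue_return(turns=8, success=True))    # -8 + 80 = 72
print(dialogue_return(turns=8, success=False))   # -8 - 40 = -48
```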
The max_turn indicates the maximum limit of turns in a conversation (e.g. 40). Precision, Recall, and F1 measure the accuracy of requestable slot filling. For the human evaluation, the Language Understanding Score and the Response Appropriateness Score measure how natural the responses of the model are, on a 5-point scale. The human evaluation reported here was carried out by the DSTC8 organizers.

6 Results

6.1 Automatic Evaluation

Table 1 shows automatic evaluation results for various decoding strategies using the user simulator provided in ConvLab. Our proposed model with the greedy decoding strategy achieved a success rate of 78.60%, an average return of 48.92, an average of 7.40 turns, a book rate of 86.34%, a precision of 0.87, a recall of 0.89, and an F1 score of 0.87 in the automatic evaluation using 500 simulated dialogues. Our model outperformed the baseline system, but failed to perform best among the submitted systems, mostly due to incorrect intent recognition in the user simulator. We believe that this can be circumvented by further training our model with reinforcement learning to avoid system responses that trigger intent recognition failures in the simulator. However, our main focus was to generate diverse system responses that looked natural to human evaluators.

6.2 Human Evaluation

Table 2 shows the final ranking of the competition using human evaluation (https://convlab.github.io/). Our proposed model with the top-p sampling (p=0.8) strategy ranked in first place with a success rate of 68.32%, an average of 19.507 turns, a language understanding score of 4.149, and a response appropriateness score of 4.287. Compared to the 2nd-ranked model, our model showed a 2.51% improvement in success rate. The performance gap was more significant in the human language metrics: 0.365 points and 0.458 points higher than the 2nd-ranked model in the Language Understanding score and the Response Appropriateness score, respectively.

6.3 Attention Weights

Figure 5 visualizes the attention weights of the transformer blocks in our model, demonstrating that our model appropriately attends to the word tokens generated by the previous module in the dialogue management pipeline, just like a pipelined dialogue system would do when generating the intermediate outputs. For example, if the user asks 'I'm looking for modern European food', our model generates the dialogue state <area> <nm>, which means the area is not mentioned. Then we can see that the attention weight on <area> <nm> in the dialogue state is relatively higher than on other tokens when it generates the system action <restaurant-request> <area>.

Model | Joint Acc. | Slot Acc.
GLAD (Zhong et al., 2018) | 35.57 | 95.44
GCE (Nouri and Hosseini-Asl, 2018) | 36.27 | 98.42
SUMBT (Lee et al., 2019a) | 46.64 | 96.44
TRADE (Wu et al., 2019) | 48.62 | 96.92
OURS + greedy | 44.03 | 96.07

Table 3: Performance comparison with other state-of-the-art models on the Dialogue State Tracking benchmark of the MultiWOZ dataset.

Model | Inform | Success | BLEU
BASELINE (Budzianowski et al., 2018) | 71.29 | 60.96 | 18.80
TOKENMOE (Pei et al., 2019) | 75.30 | 59.70 | 16.81
HDSA (Chen et al., 2019) | 82.9 | 68.90 | 23.60
STRUCTURED FUSION (Mehri et al., 2019) | 82.70 | 72.10 | 16.34
LARL (Zhao et al., 2019) | 82.78 | 79.20 | 12.80
OURS + greedy | 77.00 | 69.20 | 6.01

Table 4: Performance comparison with other state-of-the-art models on the Dialogue-Context-to-Text Generation benchmark of the MultiWOZ dataset.

As another example, if we change the system action to <restaurant-nooffer>, the model generates the system response 'I'm sorry.
There are no modern European restaurant’ and it attends on the token <restaurant-nooffer>. 6.4 MultiWOZ Benchmarks Performance As an ablation study, we test the modular performance of our model on two MultiWOZ benchmark tasks (Budzianowski et al., 2018): Dialogue State Tracking and Dialogue-Context-to-Text Generation. 6.4.1 Dialogue State Tracking Table 3 compares the dialogue state tracking accuracy of our model to those of other recent trackers in the literature. In this task, we measure the joint accuracy and slot accuracy of dialogue state tracking part of our model. Although our training objective involves other dialogue management tasks than dialogue state tracking, our model’s tracking perfor591 mance was very competitive to the state-of-the-art models. 6.4.2 Dialogue-Context-to-Text Generation Dialogue-Context-to-Text generation looks at the combined performance of the dialogue policy and the system response generation modules, measuring the quality of system response when the previous user utterance, the ground-truth dialogue state, and the ground-truth database query results are given. Our trained model can be straightforwardly adapted to perform this task by replacing the intermediate inputs with ground-truth values. Table 4 shows the Context-to-Text Generation benchmark performance compared to other recent models proposed in the literature. Again, our model was competitive to the state-of-the-art models except for the BLEU score. This is due to the fact that the system uses the large vocabulary of GPT-2, making system responses often containing diverse words that are not in the dataset. 7 Conclusion In this paper, we presented an end-to-end monolithic neural model for goal-oriented dialogues that learns to follow the core steps in the dialogue management pipeline. Since our model outputs all the intermediate results in the dialogue management pipeline, it is easy to integrate with external systems and to interpret why the system generates a particular response. The experimental results from human evaluation show evidence that our approach can provide very natural human-level interaction for goal-oriented dialogues, advancing the stateof-the-art in conversational AI agents. This also demonstrates the power of large-scale pre-trained language models to be adopted for building end-toend goal-oriented dialogue systems. Acknowledgements This work was supported by the National Research Foundation (NRF) of Korea (NRF2019R1A2C1087634) and the Ministry of Science and Information communication Technology (MSIT) of Korea (IITP No. 2020-0-00940, IITP 2019-0-00075-001 and IITP No. 2017-0-01779 XAI). References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A Neural Probabilistic Language Model. Journal of Machine Learning Research. Paweł Budzianowski and Ivan Vuli´c. 2019. Hello, It’s GPT-2 – How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems. In Proceedings of the 3rd Workshop on Neural Generation and Translation. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. MultiWOZ - A Large-Scale Multi-Domain Wizard-of-Oz Dataset for Task-Oriented Dialogue Modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Wenhu Chen, Jianshu Chen, Pengda Qin, Xifeng Yan, and William Yang Wang. 2019. Semantically Conditioned Dialog Response Generation via Hierarchical Disentangled Self-Attention. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2018. The Second Conversational Intelligence Challenge (ConvAI2). In The NeurIPS’18 Competition. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations. Seokhwan Kim, Michel Galley, Chulaka Gunasekara, Sungjin Lee, Adam Atkinson, Baolin Peng, Hannes Schulz, Jianfeng Gao, Jinchao Li, Mahmoud Adada, Minlie Huang, Luis Lastras, Jonathan K. Kummerfeld, Walter S. Lasecki, Chiori Hori, Anoop Cherian, Tim K. Marks, Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, and Raghav Gupta. 2019. The eighth dialog system technology challenge. In Third NeurIPS workshop on Conversational AI: “Today’s Practice and Tomorrow’s Potential”. Young-Bum Kim, Sungjin Lee, and Karl Stratos. 2017. OneNet: Joint Domain, Intent, Slot Prediction for Spoken Language Understanding. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In 3rd International Conference on Learning Representations. Hwaran Lee, Jinsik Lee, and Tae-Yoon Kim. 2019a. SUMBT: Slot-utterance matching for universal and scalable belief tracking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 592 Sungjin Lee, Qi Zhu, Ryuichi Takanobu, Xiang Li, Yaoqin Zhang, Zheng Zhang, Jinchao Li, Baolin Peng, Xiujun Li, Minlie Huang, and Jianfeng Gao. 2019b. ConvLab: Multi-Domain End-to-End Dialog System Platform. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying Task-oriented Dialogue Systems with Single Sequence-to-Sequence Architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Shikib Mehri, Tejas Srinivasan, and Maxine Eskenazi. 2019. Structured fusion networks for dialog. In 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Elnaz Nouri and Ehsan Hosseini-Asl. 2018. Toward scalable neural dialogue state tracking. In NeurIPS 2018, 2nd Conversational AI workshop. Jiahuan Pei, Pengjie Ren, and Maarten de Rijke. 2019. A modular task-oriented dialogue system using a neural mixture-of-experts. In WCIS: SIGIR 2019 Workshop on Conversational Interaction Systems. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. https://s3-us-west-2.amazonaws.com/openaiassets/research-covers/languageunsupervised/language understanding paper.pdf. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. https://www.techbooky.com/wpcontent/uploads/2019/02/Better-Language-Modelsand-Their-Implications.pdf. Osman Ramadan, Paweł Budzianowski, and Milica Gaˇsi´c. 2018. Large-Scale Multi-Domain Belief Tracking with Knowledge Sharing. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Tsung-Hsien Wen, Milica Gaˇsi´c, Nikola Mrkˇsi´c, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gaˇsi´c, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Jason Weston, Emily Dinan, and Alexander Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, Brussels, Belgium. Association for Computational Linguistics. Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The Dialog State Tracking Challenge. In Proceedings of the SIGDIAL 2013 Conference. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s Transformers: State-of-the-art Natural Language Processing. arXiv preprint abs:1910.03771. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2018. TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents. NeurIPS 2018 workshop on Conversational AI: “Today’s Practice and Tomorrow’s Potential”. Chien-Sheng Wu, Andrea Madotto, Ehsan HosseiniAsl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Tiancheng Zhao, Kaige Xie, and Maxine Eskenazi. 2019. Rethinking action spaces for reinforcement learning in end-to-end dialog agents with latent variable models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066–6080, July 5 - 10, 2020. ©2020 Association for Computational Linguistics

Word-level Textual Adversarial Attacking as Combinatorial Optimization

Yuan Zang1*, Fanchao Qi1*, Chenghao Yang2*†, Zhiyuan Liu1‡, Meng Zhang3, Qun Liu3, Maosong Sun1
1Department of Computer Science and Technology, Tsinghua University; Institute for Artificial Intelligence, Tsinghua University; Beijing National Research Center for Information Science and Technology
2Columbia University
3Huawei Noah's Ark Lab
{zangy17,qfc17}@mails.tsinghua.edu.cn, [email protected], {liuzy,sms}@tsinghua.edu.cn, {zhangmeng92,qun.liu}@huawei.com

Abstract

Adversarial attacks are carried out to reveal the vulnerability of deep neural networks. Textual adversarial attacking is challenging because text is discrete and a small perturbation can bring significant change to the original input. Word-level attacking, which can be regarded as a combinatorial optimization problem, is a well-studied class of textual attack methods. However, existing word-level attack models are far from perfect, largely because unsuitable search space reduction methods and inefficient optimization algorithms are employed. In this paper, we propose a novel attack model, which incorporates the sememe-based word substitution method and particle swarm optimization-based search algorithm to solve the two problems separately. We conduct exhaustive experiments to evaluate our attack model by attacking BiLSTM and BERT on three benchmark datasets. Experimental results demonstrate that our model consistently achieves much higher attack success rates and crafts more high-quality adversarial examples as compared to baseline methods. Also, further experiments show our model has higher transferability and can bring more robustness enhancement to victim models by adversarial training. All the code and data of this paper can be obtained on https://github.com/thunlp/SememePSO-Attack.

1 Introduction

Adversarial attacks use adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015), which are maliciously crafted by perturbing the original input, to fool the deep neural networks (DNNs). Extensive studies have demonstrated that DNNs are vulnerable to adversarial attacks, e.g., minor modification to highly poisonous phrases can easily deceive Google's toxic comment detection systems (Hosseini et al., 2017). From another perspective, adversarial attacks are also used to improve robustness and interpretability of DNNs (Wallace et al., 2019).

*Indicates equal contribution. Yuan developed the method, designed and conducted most experiments; Fanchao formalized the task, designed some experiments and wrote the paper; Chenghao made the original research proposal, performed human evaluation and conducted some experiments.
†Work done during internship at Tsinghua University.
‡Corresponding author.

Figure 1: An example showing search space reduction with sememe-based word substitution and adversarial example search in word-level adversarial attacks.
In the field of natural language processing (NLP) which widely employs DNNs, practical systems such as spam filtering (Stringhini et al., 2010) and malware detection (Kolter and Maloof, 2006) have been broadly used, but at the same time the concerns about their security are growing. Therefore, the research on textual adversarial attacks becomes increasingly important. Textual adversarial attacking is challenging. Different from images, a truly imperceptible perturbation on text is almost impossible because of its discrete nature. Even a slightest character-level perturbation can either (1) change the meaning and, worse still, the true label of the original input, or (2) break its grammaticality and naturality. Unfortunately, the change of true label will make the adversarial attack invalid. For example, supposing an adversary changes “she” to “he” in an input 6067 sentence to attack a gender identification model, although the victim model alters its prediction result, this is not a valid attack. And the adversarial examples with broken grammaticality and naturality (i.e., poor quality) can be easily defended (Pruthi et al., 2019). Various textual adversarial attack models have been proposed (Wang et al., 2019a), ranging from character-level flipping (Ebrahimi et al., 2018) to sentence-level paraphrasing (Iyyer et al., 2018). Among them, word-level attack models, mostly word substitution-based models, perform comparatively well on both attack efficiency and adversarial example quality (Wang et al., 2019b). Word-level adversarial attacking is actually a problem of combinatorial optimization (Wolsey and Nemhauser, 1999), as its goal is to craft adversarial examples which can successfully fool the victim model using a limited vocabulary. In this paper, as shown in Figure 1, we break this combinatorial optimization problem down into two steps including (1) reducing search space and (2) searching for adversarial examples. The first step is aimed at excluding invalid or low-quality potential adversarial examples and retaining the valid ones with good grammaticality and naturality. The most common manner is to pick some candidate substitutes for each word in the original input and use their combinations as the reduced discrete search space. However, existing attack models either disregard this step (Papernot et al., 2016) or adopt unsatisfactory substitution methods that do not perform well in the trade-off between quality and quantity of the retained adversarial examples (Alzantot et al., 2018; Ren et al., 2019). The second step is supposed to find adversarial examples that can successfully fool the victim model in the reduced search space. Previous studies have explored diverse search algorithms including gradient descent (Papernot et al., 2016), genetic algorithm (Alzantot et al., 2018) and greedy algorithm (Ren et al., 2019). Some of them like gradient descent only work in the white-box setting where full knowledge of the victim model is required. In real situations, however, we usually have no access to the internal structures of victim models. As for the other black-box algorithms, they are not efficient and effective enough in searching for adversarial examples. These problems negatively affect the overall attack performance of existing word-level adversarial attacking. To solve the problems, we propose a novel black-box word-level adversarial attack model, which reforms both the two steps. 
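Schematically, the two-step decomposition described above can be sketched as follows. This is a generic illustration only: the helper functions are hypothetical, and a stand-in random search is used in place of the search algorithms discussed in this paper, so it is not the proposed model.

```python
import random
from typing import Callable, List

def word_level_attack(
    words: List[str],
    get_substitutes: Callable[[str], List[str]],        # step 1: search space reduction
    target_label_prob: Callable[[List[str]], float],    # black-box query to the victim model
    search_budget: int = 1000,
) -> List[str]:
    """Generic two-step word-level attack: build a reduced search space,
    then search it for a sentence that raises the target label's probability."""
    # Step 1: candidate substitutes for each position (the original word is always allowed).
    space = [[w] + get_substitutes(w) for w in words]
    # Step 2: any black-box search over the reduced space; simple hill-climbing
    # random search stands in here for genetic / greedy / swarm-based algorithms.
    best, best_score = list(words), target_label_prob(words)
    for _ in range(search_budget):
        cand = list(best)
        i = random.randrange(len(cand))
        cand[i] = random.choice(space[i])
        score = target_label_prob(cand)
        if score > best_score:
            best, best_score = cand, score
    return best
```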
In the first step, we design a word substitution method based on sememes, the minimum semantic units, which can retain more potential valid adversarial examples with high quality. In the second step, we present a search algorithm based on particle swarm optimization (Eberhart and Kennedy, 1995), which is very efficient and performs better in finding adversarial examples. We conduct exhaustive experiments to evaluate our model. Experimental results show that, compared with baseline models, our model not only achieves the highest attack success rate (e.g., 100% when attacking BiLSTM on IMDB) but also possesses the best adversarial example quality and comparable attack validity. We also conduct decomposition analyses to manifest the advantages of the two parts of our model separately. Finally, we demonstrate that our model has the highest transferability and can bring the most robustness improvement to victim models by adversarial training. 2 Background In this section, we first briefly introduce sememes, and then we give an overview of the classical particle swarm optimization algorithm. 2.1 Sememes In linguistics, a sememe is defined as the minimum semantic unit of human languages (Bloomfield, 1926). The meaning of a word can be represented by the composition of its sememes. In the field of NLP, sememe knowledge bases are built to utilize sememes in practical applications, where sememes are generally regarded as semantic labels of words (as shown in Figure 1). HowNet (Dong and Dong, 2006) is the most wellknown one. It annotates over one hundred thousand English and Chinese words with a predefined sets of about 2,000 sememes. Its sememe annotations are sense-level, i.e., each sense of a (polysemous) word is annotated with sememes separately. With the help of HowNet, sememes have been successfully applied to many NLP tasks including word representation learning (Niu et al., 2017), sentiment analysis (Fu et al., 2013), semantic composition (Qi et al., 2019), sequence modeling (Qin et al., 2019), reverse dictionary (Zhang et al., 2019b), etc. 6068 2.2 Particle Swarm Optimization Inspired by the social behaviors like bird flocking, particle swarm optimization (PSO) is a kind of metaheuristic population-based evolutionary computation paradigms (Eberhart and Kennedy, 1995). It has been proved effective in solving the optimization problems such as image classification (Omran et al., 2004), part-of-speech tagging (Silva et al., 2012) and text clustering (Cagnina et al., 2014). Empirical studies have proven it is more efficient than some other optimization algorithms like the genetic algorithm (Hassan et al., 2005). PSO exploits a population of interacting individuals to iteratively search for the optimal solution in the specific space. The population is called a swarm and the individuals are called particles. Each particle has a position in the search space and moves with an adaptable velocity. Formally, when searching in a D-dimensional continuous space S ⊆RD with a swarm containing N particles, the position and velocity of each particle can be represented by xn ∈S and vn ∈RD respectively, n ∈{1, · · · , N}. Next we describe the PSO algorithm step by step. (1) Initialize. At the very beginning, each particle is randomly initialized with a position xn in the search space and a velocity vn. Each dimension of the initial velocity vn d ∈[−Vmax, Vmax], d ∈{1, · · · , D}. (2) Record. Each position in the search space corresponds to an optimization score. 
The position a particle has reached with the highest optimization score is recorded as its individual best position. The best position among the individual best positions of all the particles is recorded as the global best position.

(3) Terminate. If the current global best position has achieved the desired optimization score, the algorithm terminates and outputs the global best position as the search result.

(4) Update. Otherwise, the velocity and position of each particle are updated according to its current position and individual best position together with the global best position. The updating formulae are

v^n_d = \omega v^n_d + c_1 \times r_1 \times (p^n_d - x^n_d) + c_2 \times r_2 \times (p^g_d - x^n_d), \quad x^n_d = x^n_d + v^n_d,   (1)

where ω is the inertia weight, p^n_d and p^g_d are the d-th dimensions of the n-th particle's individual best position and the global best position respectively, c_1 and c_2 are acceleration coefficients, which are positive constants that control how fast the particle moves towards its individual best position and the global best position, and r_1 and r_2 are random coefficients. After updating, the algorithm goes back to the Record step.

3 Methodology

In this section, we detail our word-level adversarial attack model. It incorporates two parts, namely the sememe-based word substitution method and the PSO-based adversarial example search algorithm.

3.1 Sememe-based Word Substitution Method

The sememes of a word are supposed to accurately depict the meaning of the word (Dong and Dong, 2006). Therefore, the words with the same sememe annotations should have the same meanings, and they can serve as substitutes for each other. Compared with other word substitution methods, mostly including word embedding-based (Sato et al., 2018), language model-based (Zhang et al., 2019a) and synonym-based methods (Samanta and Mehta, 2017; Ren et al., 2019), the sememe-based word substitution method can achieve a better trade-off between quality and quantity of substitute words.

For one thing, although the word embedding and language model-based substitution methods can find as many substitute words as we want simply by relaxing the restrictions on embedding distance and language model prediction score, they inevitably introduce many inappropriate and low-quality substitutes, such as antonyms and semantically related but not similar words, into adversarial examples, which might break the semantics, grammaticality and naturality of the original input. In contrast, the sememe-based and, of course, the synonym-based substitution methods do not have this problem. For another, compared with the synonym-based method, the sememe-based method can find more substitute words and, in turn, retain more potential adversarial examples, because HowNet annotates sememes for all kinds of words. The synonym-based method, however, depends on thesauri like WordNet (Miller, 1995), which provide no synonyms for many words (e.g., proper nouns), and the number of a word's synonyms is very limited. An empirical comparison of different word substitution methods is given in Section 4.6.

In our sememe-based word substitution method, to preserve grammaticality, we only substitute content words[1] and restrict the substitutes to having the same part-of-speech tags as the original words. Considering polysemy, a word w can be substituted by another word w* only if one of w's senses has the same sememe annotations as one of w*'s senses.
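As an illustration of this substitution criterion, the check can be sketched as follows. The sketch assumes a HowNet-style mapping from each word to the sememe sets of its senses; the data structures, function names, and toy sememe labels are hypothetical, not the authors' code or annotations.

```python
from typing import Dict, List, Set

# Hypothetical sense-level sememe annotations: word -> one sememe set per sense.
SememeDict = Dict[str, List[Set[str]]]

def can_substitute(w: str, w_sub: str, pos: Dict[str, str], sememes: SememeDict) -> bool:
    """w_sub may replace w iff both share a POS tag and some sense of w
    has exactly the same sememe annotations as some sense of w_sub."""
    if pos.get(w) != pos.get(w_sub):
        return False
    return any(s1 == s2 for s1 in sememes.get(w, []) for s2 in sememes.get(w_sub, []))

# Toy example (sememes invented for illustration only).
sememes = {
    "like": [{"FondOf"}],
    "love": [{"FondOf"}],
    "film": [{"shows"}, {"thin-object"}],
    "movie": [{"shows"}],
}
pos = {"like": "VERB", "love": "VERB", "film": "NOUN", "movie": "NOUN"}
assert can_substitute("film", "movie", pos, sememes)       # share a sense with sememe {"shows"}
assert not can_substitute("like", "movie", pos, sememes)   # different POS tags
```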
When making substitutions, we conduct lemmatization to enable more substitutions and delemmatization to avoid introducing grammatical mistakes.

3.2 PSO-based Adversarial Example Search Algorithm

Before presenting our algorithm, we first explain what the concepts in the original PSO algorithm correspond to in the adversarial example search problem. Different from the original PSO, the search space of word-level adversarial example search is discrete. A position in the search space corresponds to a sentence (or an adversarial example), and each dimension of a position corresponds to a word. Formally, x^n = w^n_1 \cdots w^n_d \cdots w^n_D, w^n_d \in V(w^o_d), where D is the length (word number) of the original input, w^o_d is the d-th word in the original input, and V(w^o_d) is composed of w^o_d and its substitutes. The optimization score of a position is the target label's prediction probability given by the victim model, where the target label is the desired classification result for an adversarial attack. Taking a binary classification task as an example, if the true label of the original input is "positive", the target label is "negative", and vice versa. In addition, a particle's velocity now relates to the position change probability, i.e., v^n_d determines how likely it is that w^n_d is substituted by another word.

Next we describe our algorithm step by step. First, for the Initialize step, since we expect the adversarial examples to differ from the original input as little as possible, we do not make random initialization. Instead, we randomly substitute one word of the original input to determine the initial position of a particle. This operation is actually the mutation operation of the genetic algorithm, which has also been employed in some studies on discrete PSO (Higashi and Iba, 2003). We repeat mutation N times to initialize the positions of N particles. Each dimension of each particle's velocity is randomly initialized between −V_max and V_max.

[1] Content words are the words that carry meanings and consist mostly of nouns, verbs, adjectives and adverbs.

For the Record step, our algorithm is the same as the original PSO algorithm. For the Terminate step, the termination condition is that the victim model predicts the target label for any of the current adversarial examples. For the Update step, considering the discreteness of the search space, we follow Kennedy and Eberhart (1997) and adapt the velocity updating formula to

v^n_d = \omega v^n_d + (1 - \omega) \times [I(p^n_d, x^n_d) + I(p^g_d, x^n_d)],   (2)

where ω is still the inertia weight, and I(a, b) is defined as

I(a, b) = \begin{cases} 1, & a = b, \\ -1, & a \neq b. \end{cases}   (3)

Following Shi and Eberhart (1998), we let the inertia weight decrease as the number of iterations increases, aiming to make the particles highly dynamic to explore more positions in the early stage and gather around the best positions quickly in the final stage. Specifically,

\omega = (\omega_{\max} - \omega_{\min}) \times \frac{T - t}{T} + \omega_{\min},   (4)

where 0 < ω_min < ω_max < 1, and T and t are the maximum and current numbers of iteration times.

The updating of positions also needs to be adjusted to the discrete search space. Inspired by Kennedy and Eberhart (1997), instead of making addition, we adopt a probabilistic method to update the position of a particle to the best positions. We design two-step position updating. In the first step, a new movement probability P_i is introduced, with which a particle determines whether it moves to its individual best position as a whole.
Once a particle decides to move, the change of each dimension of its position depends on the same dimension of its velocity, specifically with the probability of sigmoid(v^n_d). No matter whether a particle has moved towards its individual best position or not, it is processed in the second step. In the second step, each particle determines whether to move to the global best position with another movement probability P_g, and the change of each position dimension again relies on sigmoid(v^n_d). P_i and P_g vary with iteration to enhance search efficiency by adjusting the balance between local and global search, i.e., encouraging particles to explore more space around their individual best positions in the early stage and to search for better positions around the global best position in the final stage. Formally,

P_i = P_{\max} - \frac{t}{T} \times (P_{\max} - P_{\min}), \quad P_g = P_{\min} + \frac{t}{T} \times (P_{\max} - P_{\min}),   (5)

where 0 < P_min < P_max < 1. Besides, to enhance the search in unexplored space, we apply mutation to each particle after the update step. To avoid excessive modification, mutation is conducted with the probability

P_m(x^n) = \min\left(0,\ 1 - \frac{k\,E(x^n, x^o)}{D}\right),   (6)

where k is a positive constant, x^o represents the original input, and E measures the word-level edit distance (number of different words between two sentences). E(x^n, x^o)/D is defined as the modification rate of an adversarial example. After mutation, the algorithm returns to the Record step.

4 Experiments

In this section, we conduct comprehensive experiments to evaluate our attack model on the tasks of sentiment analysis and natural language inference.

4.1 Datasets and Victim Models

For sentiment analysis, we choose two benchmark datasets including IMDB (Maas et al., 2011) and SST-2 (Socher et al., 2013). Both of them are binary sentiment classification datasets. But the average sentence length of SST-2 (17 words) is much shorter than that of IMDB (234 words), which renders attacks on SST-2 more challenging. For natural language inference (NLI), we use the popular Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015). Each instance in SNLI comprises a premise-hypothesis sentence pair and is labelled with one of three relations: entailment, contradiction and neutral.

As for victim models, we choose two widely used universal sentence encoding models, namely bidirectional LSTM (BiLSTM) with max pooling (Conneau et al., 2017) and BERT-Base (BERT) (Devlin et al., 2019). For BiLSTM, its hidden states are 128-dimensional, and it uses 300-dimensional pre-trained GloVe (Pennington et al., 2014) word embeddings. Details of the datasets and the classification accuracy results of the victim models are listed in Table 1.

Table 1: Details of datasets and the accuracy of the victim models. "#Class" is the number of classes; "Avg. #W" is the average sentence length (number of words); "Train", "Dev" and "Test" are the instance numbers of the training, validation and test sets respectively; "BiLSTM %ACC" and "BERT %ACC" are the classification accuracies of BiLSTM and BERT.
Dataset  Task                 #Class  Avg. #W  Train    Dev    Test    BiLSTM %ACC  BERT %ACC
IMDB     Sentiment Analysis   2       234      25000    0      25000   89.10        90.76
SST-2    Sentiment Analysis   2       17       6920     872    1821    83.75        90.28
SNLI     NLI                  3       8        550152   10000  10000   84.43        89.58

4.2 Baseline Methods

We select two recent open-source word-level adversarial attack models as the baselines, which are typical and involve different search space reduction methods (step 1) and search algorithms (step 2).
The first baseline method (Alzantot et al., 2018) uses the combination of restrictions on word embedding distance and language model prediction score to reduce search space. As for search algorithm, it adopts genetic algorithm, another popular metaheuristic population-based evolutionary algorithm. We use “Embedding/LM+Genetic” to denote this baseline method. The second baseline (Ren et al., 2019) chooses synonyms from WordNet (Miller, 1995) as substitutes and designs a saliency-based greedy algorithm as the search algorithm. We call this method “Synonym+Greedy”. This baseline model is very similar to another attack model TextFooler (Jin et al., 2019), which has extra semantic similarity checking when searching adversarial examples. But we find the former performs better in almost all experiments, and thus we only select the former as a baseline for comparison. In addition, to conduct decomposition analyses of different methods in the two steps separately, we combine different search space reduction methods (Embedding/LM, Synonym and our sememe-based substitution method (Sememe)), and search algorithms (Genetic, Greedy and our PSO). 6071 Metrics Evaluation Method Better? Success Rate Auto Higher Validity Human (Valid Attack Rate) Higher Modification Rate Auto Lower Grammaticality Auto (Error Increase Rate) Lower Fluency Auto (Perplexity) Lower Naturality Human (Naturality Score) Higher Table 2: Details of evaluation metrics. “Auto” and “Human” represent automatic and human evaluations respectively. “Higher” and “Lower” mean the higher/lower the metric, the better a model performs. 4.3 Experimental Settings For our PSO, Vmax is set to 1, ωmax and ωmin are set to 0.8 and 0.2, Pmax and Pmin are also set to 0.8 and 0.2, and k in Equation (6) is set to 2. All these hyper-parameters have been tuned on the validation set. For the baselines, we use their recommended hyper-parameter settings. For the two population-based search algorithms Genetic and PSO, we set the maximum number of iteration times (T in Section 3.2) to 20 and the population size (N in Section 3.2) to 60, which are the same as Alzantot et al. (2018). 4.4 Evaluation Metrics To improve evaluation efficiency, we randomly sample 1, 000 correctly classified instances from the test sets of the three datasets as the original input to be perturbed. For SNLI, only the hypotheses are perturbed. Following Alzantot et al. (2018), we restrict the length of the original input to 10-100, exclude the out-of-vocabulary words from the substitute sets, and discard the adversarial examples with modification rates higher than 25%. We evaluate the performance of attack models including their attack success rates, attack validity and the quality of adversarial examples. The details of our evaluation metrics are listed in Table 2. (1) The attack success rate is defined as the percentage of the attacks which craft an adversarial example to make the victim model predict the target label. (2) The attack validity is measured by the percentage of valid attacks to successful attacks, where the adversarial examples crafted by valid attacks have the same true labels as the original input. (3) For the quality of adversarial examples, we divide it into four parts including modification rate, grammaticality, fluency and naturality. Grammaticality is measured by the increase rate of grammatical error numbers of adversarial examples compared with the original input, where we use LanguageTool2 to obtain the grammatical error number of a sentence. 
We utilize the language model perplexity (PPL) to measure the fluency with the help of GPT-2 (Radford et al., 2019). The naturality reflects whether an adversarial example is natural and indistinguishable from human-written text. We evaluate attack validity and adversarial example naturality only on SST-2 by human evaluation with the help of Amazon Mechanical Turk3. We randomly sample 200 adversarial examples, and ask the annotators to make a binary sentiment classification and give a naturality score (1, 2 or 3, higher better) for each adversarial example and original input. More annotation details are given in Appendix A. 4.5 Attack Performance Attack Success Rate The attack success rate results of all the models are listed in Table 3. We observe that our attack model (Sememe+PSO) achieves the highest attack success rates on all the three datasets (especially the harder SST2 and SNLI) and two victim models, proving the superiority of our model over baselines. It attacks BiLSTM/BERT on IMDB with a notably 100.00%/98.70% success rate, which clearly demonstrates the vulnerability of DNNs. By comparing three word substitution methods (search space reduction methods) and three search algorithms, we find Sememe and PSO consistently outperform their counterparts. Further decomposition analyses are given in a later section. Validity and Adversarial Example Quality We evaluate the attack validity and adversarial example quality of our model together with the two baseline methods (Embedding/LM+Genetic and Synonym+Greedy). The results of automatic and human evaluations are displayed in Table 4 and 5 respectively.4 Note that the human evaluations including attack validity and adversarial example naturality are conducted on SST-2 only. We find that in terms of automatic evaluations of adversarial example quality, including modification rate, grammaticality and fluency, our model consistently outperforms the two baselines on whichever victim model and dataset. As for attack validity and adver2https://www.languagetool.org 3https://www.mturk.com 4Automatic evaluation results of adversarial example quality of all the combination models are shown in Appendix B. 6072 Word Substitution Method Search Algorithm BiLSTM BERT IMDB SST-2 SNLI IMDB SST-2 SNLI Embedding/LM Genetic 86.90 67.70 44.40 87.50 66.20 44.30 Greedy 80.90 69.00 47.70 62.50 56.20 42.40 PSO 96.90 78.50 50.90 93.60 74.40 53.10 Synonym Genetic 95.50 73.00 51.40 92.90 78.40 56.00 Greedy 87.20 73.30 57.70 73.00 64.60 52.70 PSO 98.70 79.20 61.80 96.20 80.90 62.60 Sememe Genetic 96.90 78.50 50.90 93.60 74.40 53.10 Greedy 95.20 87.70 70.40 80.50 74.80 66.30 PSO 100.00 93.80 73.40 98.70 91.20 78.90 Table 3: The attack success rates (%) of different attack models. Victim Model Attack Model IMDB SST-2 SNLI %M %I PPL %M %I PPL %M %I PPL BiLSTM Embedding/LM+Genetic 9.76 5.49 124.20 12.03 7.08 319.98 13.31 14.12 235.20 Synonym+Greedy 6.47 4.49 115.31 10.25 4.65 317.27 12.32 21.37 311.04 Sememe+PSO 3.71 1.44 88.98 9.06 3.17 276.53 11.72 11.08 222.40 BERT Embedding/LM+Genetic 7.41 4.22 106.12 10.41 5.09 314.22 13.04 15.09 225.92 Synonym+Greedy 4.49 4.48 98.60 8.51 4.11 316.30 11.60 11.65 285.00 Sememe+PSO 3.69 1.57 90.74 8.24 2.03 289.94 11.72 10.14 223.22 Table 4: Automatic evaluation results of adversarial example quality. “%M”, “%I” and “PPL” indicate the modification rate, grammatical error increase rate and language model perplexity respectively. 
Victim Attack Model %Valid NatScore N/A Original Input 90.0 2.30 BiLSTM Embedding/LM+Genetic 65.5 2.205 Synonym+Greedy 72.0 2.190 Sememe+PSO 70.5 2.210 BERT Embedding/LM+Genetic 74.5 2.165 Synonym+Greedy 66.5 2.165 Sememe+PSO 72.0 2.180 Table 5: Human evaluation results of attack validity and adversarial example naturality on SST-2, where the second row additionally lists the evaluation results of original input. “%Valid” refers to the percentage of valid attacks. “NatScore” is the average naturality score of adversarial examples. sarial example naturality, our Sememe+PSO model obtains a slightly higher overall performance than the two baselines. But its adversarial examples are still inferior to original human-authored input, especially in terms of validity (label consistency). We conduct Student’s t-tests to further measure the difference between the human evaluation results of different models, where the statistical significance threshold of p-value is set to 0.05. We find that neither of the differences of attack validity and adversarial example naturality between different models are significant. In addition, the adversarial examples of any attack model have significantly worse label consistency (validity) than the original input, but possesses similar naturality. More details of statistical significance test are given in Appendix D. For Embedding/LM, relaxing the restrictions on embedding distance and language model prediction score can improve its attack success rate but sacrifices attack validity. To make a specific comparison, we adjust the hyper-parameters of Embedding/LM+Genetic5 to increase its attack success rates to 96.90%, 90.30%, 58.00%, 93.50%, 83.50% and 62.90% respectively on attacking the two victim models on the three datasets (in the same order as Table 3). Nonetheless, its attack validity rates against BiLSTM and BERT on SST-2 dramatically fall to 59.5% and 56.5%. In contrast, ours are 70.5% and 72.0%, and their differences are significant according to the results of significance tests in Appendix D. 4.6 Decomposition Analyses In this section, we conduct detailed decomposition analyses of different word substitution methods (search space reduction methods) and different search algorithms, aiming to further demonstrate the advantages of our sememe-based word substitution method and PSO-based search algorithm. 5The detailed hyper-parameter settings are given in Appendix C. 6073 Word Substitution Method IMDB SST-2 SNLI Embedding/LM 3.44 3.27 3.42 Synonym 3.55 3.08 3.14 Sememe 13.92 10.97 12.87 Table 6: The average number of substitutes provided by different word substitution methods. She breaks the pie dish and screams out that she is not handicapped. Embedding/LM Synonym Sememe tart, pizza, apple, shoemaker, cake cheesecake None cheese, popcorn, ham, cream, break, cake, pizza, chocolate, and 55 more Table 7: A real case showing the substitutes found by three word substitution methods, where the original word is colored green and appropriate substitutes are colored red. 1 2 3 4 5 10 20 30 50 Maximum Number of Iteration Times 25% 50% 75% 100% Attack Success Rate Sememe+PSO Synonym+PSO Sememe+Genetic Synonym+Genetic Figure 2: Attack success rates of different models with different maximum numbers of iteration times. The xcoordinate is in log-2 scale. Word Substitution Method Table 6 lists the average number of substitutes provided by different word substitution methods on the three datasets. 
It shows Sememe can find much more substitutes than the other two counterparts, which explains the high attack success rates of the models incorporating Sememe. Besides, we give a real case from SST-2 in Table 7 which lists substitutes found by the three methods. We observe that Embedding/LM find many improper substitutes, Synonym cannot find any substitute because the original word “pie” has no synonyms in WordNet, and only Sememe finds many appropriate substitutes. Search Algorithm We compare the two population-based search algorithms Genetic and PSO by changing two important hyper-parameters, namely the maximum number of iteration times T and the population size N. The results of attack success rate are shown in Figure 2 and 3. From the two figures, we find our PSO outperforms Genetic 2 3 4 5 10 20 30 40 60 100 Population Size 75% 80% 85% 90% 95% 100% Attack Success Rate Sememe+PSO Synonym+PSO Sememe+Genetic Synonym+Genetic Figure 3: Attack success rates of different models with population sizes. The x-coordinate is in log-2 scale. Transfer Attack Model IMDB SST-2 SNLI BiLSTM Embedding/LM+Genetic 81.93 70.61 61.26 ⇓ Synonym+Greedy 77.29 64.94 65.34 BERT Sememe+PSO 75.80 64.71 59.54 BERT Embedding/LM+Genetic 86.63 65.71 49.66 ⇓ Synonym+Greedy 81.64 58.67 45.16 BiLSTM Sememe+PSO 78.42 58.11 46.89 Table 8: The classification accuracy of transferred adversarial examples on the three datasets. Lower accuracy reflects higher transferability. consistently, especially in the setting with severe restrictions on maximum number of iteration times and population size, which highlights the efficiency of PSO. 4.7 Transferability The transferability of adversarial examples reflects whether an attack model can attack a DNN model without any access to it (Kurakin et al., 2016). It has been widely used as an important evaluation metric in adversarial attacks. We evaluate the transferability of adversarial examples by using BiLSTM to classify the adversarial examples crafted for attacking BERT, and vice versa. Table 8 shows the classification accuracy results of transferred adversarial examples. Note that lower accuracy signifies higher transferability. The lower the accuracy is, the higher the transferability is. We find compared with the two baselines, our Sememe+PSO crafts adversarial examples with overall higher transferability. 4.8 Adversarial Training Adversarial training is proposed to improve the robustness of victim models by adding adversarial examples to the training set (Goodfellow et al., 2015). In this experiment, for each attack model, 6074 we craft 692 adversarial examples (10% of the original training set size) by using it to attack BiLSTM on the training set of SST-2. Then we add the adversarial examples to the training set and retrain a BiLSTM. We re-evaluate its robustness by calculating the attack success rates of different attack models. Table 9 lists the results of adversarial training. Note larger attack success rate decrease signifies greater robustness improvement. We find that adversarial training can improve the robustness of victim models indeed, and our Sememe+PSO model brings greater robustness improvement than the two baselines, even when the attack models are exactly themselves.6 From the perspective of attacking, our Sememe+PSO model is still more threatening than others even under the defense of adversarial training. 
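For concreteness, the adversarial training procedure evaluated here follows the standard augmentation recipe of crafting adversarial examples on the training set, adding them with their original labels, and retraining the victim. The following is a minimal sketch under that reading; the function and attribute names are hypothetical and this is not the authors' training code.

```python
def adversarial_training(train_set, attack_model, train_fn, ratio=0.10):
    """Augment the training set with adversarial examples crafted by attack_model
    and retrain the victim model, as in the robustness experiment above."""
    n_adv = int(ratio * len(train_set))            # e.g., 692 examples for SST-2
    adv_examples = []
    for text, label in train_set:
        if len(adv_examples) >= n_adv:
            break
        adv = attack_model.attack(text, label)     # assumed to return None if the attack fails
        if adv is not None:
            adv_examples.append((adv, label))      # adversarial text keeps the original label
    return train_fn(train_set + adv_examples)      # retrained victim, then re-attacked for Table 9
```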
We also manually select 692 valid adversarial examples generated by Sememe+PSO to conduct adversarial training, which leads to even greater robustness improvement (last column of Table 9). The results show that adversarial example validity has big influence on adversarial training effect. 5 Related Work Existing textual adversarial attack models can be classified into three categories according to the perturbation levels of their adversarial examples. Sentence-level attacks include adding distracting sentences (Jia and Liang, 2017), paraphrasing (Iyyer et al., 2018; Ribeiro et al., 2018) and performing perturbations in the continuous latent semantic space (Zhao et al., 2018). Adversarial examples crafted by these methods usually have profoundly different forms from original input and their validity are not guaranteed. Character-level attacks are mainly random character manipulations including swap, substitution, deletion, insertion and repeating (Belinkov and Bisk, 2018; Gao et al., 2018; Hosseini et al., 2017). In addition, gradient-based character substitution methods have also been explored, with the help of one-hot character embeddings (Ebrahimi et al., 2018) or visual character embeddings (Eger et al., 2019). Although character-level attacks can achieve high success rates, they break the grammaticality and naturality of original input and can be easily defended (Pruthi et al., 2019). 6For instance, using Embedding/LM+Genetic in adversarial training to defend its attack declines the attack success rate by 2.60% while using our Sememe+PSO model declines by 3.53%. Att \Adv.T None E/L+G Syn+G Sem+P Sem+P* E/L+G 67.70 -2.60 -0.60 -3.53 -5.10 Syn+G 73.30 -2.67 -3.50 -3.13 -3.53 Sem+P 93.80 -1.07 0.03 -2.93 -4.33 Table 9: The attack success rates of different attack models when attacking BiLSTM on SST-2 and their decrements brought by adversarial training. “Att” and “Adv.T” denote “Attack Model” and “Adversarial Training”. E/L+G, Syn+G and Sem+P represent Embedding/LM+Genetic, Synonym+Greedy and our Sememe+PSO, respectively. “Sem+P*” denotes only using the valid adversarial examples generated by Sememe+PSO in adversarial training. As for word-level attacks, following our twostep modeling, their adversarial example space reduction methods (step 1) involve using word embeddings (Sato et al., 2018) or language model (Zhang et al., 2019a) to filter words, selecting synonyms as substitutes (Samanta and Mehta, 2017; Ren et al., 2019; Jin et al., 2019), and their combinations (Alzantot et al., 2018; Glockner et al., 2018). The search algorithms (step 2) include gradient descent (Papernot et al., 2016; Sato et al., 2018; Gong et al., 2018), genetic algorithm (Alzantot et al., 2018), Metropolis-Hastings sampling (Zhang et al., 2019a), saliency-based greedy algorithm (Liang et al., 2018; Ren et al., 2019; Jin et al., 2019). In comparison, our model adopts new methods in both steps which are more powerful. 6 Conclusion and Future Work In this paper, we propose a novel word-level attack model comprising the sememe-based word substitution method and particle swarm optimization-based search algorithm. We conduct extensive experiments to demonstrate the superiority of our model in terms of attack success rate, adversarial example quality, transferability and robustness improvement to victim models by adversarial training. In the future, we will try to increase the robustness gains of adversarial training and consider utilizing sememes in adversarial defense model. 
Acknowledgments This work is supported by the National Key Research and Development Program of China (No. 2018YFB1004503) and the National Natural Science Foundation of China (NSFC No. 61732008, 61772302). We also thank the anonymous reviewers for their valuable comments and suggestions. 6075 References Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the EMNLP. Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine translation. In Proceedings of ICLR. Leonard Bloomfield. 1926. A set of postulates for the science of language. Language, 2(3). Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of EMNLP. Leticia Cagnina, Marcelo Errecalde, Diego Ingaramo, and Paolo Rosso. 2014. An efficient particle swarm optimization approach to cluster short texts. Information Sciences, 265:36–49. Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of EMNLP. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT. Zhendong Dong and Qiang Dong. 2006. HowNet and the computation of meaning. World Scientific. Russell Eberhart and James Kennedy. 1995. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. Hotflip: White-box adversarial examples for text classification. In Proceedings of ACL. Steffen Eger, G¨ozde G¨ul S¸ahin, Andreas R¨uckl´e, JiUng Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, and Iryna Gurevych. 2019. Text processing like humans do: Visually attacking and shielding NLP systems. In Proceedings of NAACL-HLT. Xianghua Fu, Guo Liu, Yanyan Guo, and Zhiqiang Wang. 2013. Multi-aspect sentiment analysis for chinese online social reviews based on topic modeling and hownet lexicon. Knowledge-Based Systems, 37:186–195. Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In Proceedings of IEEE Security and Privacy Workshops. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that require simple lexical inferences. In Proceedings of ACL. Zhitao Gong, Wenlu Wang, Bo Li, Dawn Song, and Wei-Shinn Ku. 2018. Adversarial texts with gradient methods. arXiv preprint arXiv:1801.07175. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In Proceedings of ICLR. Rania Hassan, Babak Cohanim, Olivier De Weck, and Gerhard Venter. 2005. A comparison of particle swarm optimization and the genetic algorithm. In 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference. Natsuki Higashi and Hitoshi Iba. 2003. Particle swarm optimization with gaussian mutation. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium. Hossein Hosseini, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. Deceiving google’s perspective api built for detecting toxic comments. arXiv preprint arXiv:1702.08138. 
Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of the NAACL-HLT. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of EMNLP. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is BERT really robust? Natural language attack on text classification and entailment. arXiv preprint arXiv:1907.11932. James Kennedy and Russell C Eberhart. 1997. A discrete binary version of the particle swarm algorithm. In IEEE International Conference on Systems, Man, and Cybernetics. J Zico Kolter and Marcus A Maloof. 2006. Learning to detect and classify malicious executables in the wild. Journal of Machine Learning Research, 7(Dec):2721–2744. Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533. Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep text classification can be fooled. In Proceedings of IJCAI. Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of ACL-HLT. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. 6076 Yilin Niu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Improved word representation learning with sememes. In Proceedings ACL. Mahamed G Omran, Andries P Engelbrecht, and Ayed Salman. 2004. Image classification using particle swarm optimization. In Recent advances in simulated evolution and learning, pages 347–365. World Scientific. Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. 2016. Crafting adversarial input sequences for recurrent neural networks. In Proceedings of MILCOM. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP. Danish Pruthi, Bhuwan Dhingra, and Zachary C Lipton. 2019. Combating adversarial misspellings with robust word recognition. In Proceedings of ACL. Fanchao Qi, Junjie Huang, Chenghao Yang, Zhiyuan Liu, Xiao Chen, Qun Liu, and Maosong Sun. 2019. Modeling semantic compositionality with sememe knowledge. In Proceedings of ACL. Yujia Qin, Fanchao Qi, Sicong Ouyang, Zhiyuan Liu, Cheng Yang, Yasheng Wang, Qun Liu, and Maosong Sun. 2019. Enhancing recurrent neural networks with sememes. arXiv preprint arXiv:1910.08910. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of ACL. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging nlp models. In Proceedings of ACL. Suranjana Samanta and Sameep Mehta. 2017. Towards crafting text adversarial samples. arXiv preprint arXiv:1707.02812. Motoki Sato, Jun Suzuki, Hiroyuki Shindo, and Yuji Matsumoto. 2018. Interpretable adversarial perturbation in input embedding space for text. In Proceedings of IJCAI. Yuhui Shi and Russell C Eberhart. 1998. Parameter selection in particle swarm optimization. In Proceedings of the International conference on evolutionary programming. Springer. 
Ana Paula Silva, Arlindo Silva, and Irene Rodrigues. 2012. Biopos: Biologically inspired algorithms for pos tagging. In Proceedings of the First International Workshop on Optimization Techniques for Human Language Technology. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP. Gianluca Stringhini, Christopher Kruegel, and Giovanni Vigna. 2010. Detecting spammers on social networks. In Proceedings of the 26th Annual Computer Security Applications Conference. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In Proceedings of ICLR. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing nlp. In Proceedings of EMNLP-IJCNLP. Wenqi Wang, Lina Wang, Benxiao Tang, Run Wang, and Aoshuang Ye. 2019a. A survey: Towards a robust deep neural network in text domain. arXiv preprint arXiv:1902.07285. Xiaosen Wang, Hao Jin, and Kun He. 2019b. Natural language adversarial attacks and defenses in word level. arXiv preprint arXiv:1909.06723. Laurence A Wolsey and George L Nemhauser. 1999. Integer and combinatorial optimization. John Wiley & Sons. Huangzhao Zhang, Hao Zhou, Ning Miao, and Lei Li. 2019a. Generating fluent adversarial examples for natural languages. In Proceedings of ACL. Lei Zhang, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Qun Liu, and Maosong Sun. 2019b. Multichannel reverse dictionary model. arXiv preprint arXiv:1912.08441. Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018. Generating natural adversarial examples. In Proceedings of ICLR. A Human Evaluation Details For each adversarial example and original input, we ask three workers to choose a sentiment label from “Positive” and “Negative” for it and annotate its naturality score from {1, 2, 3}, which indicates “Machine generated”, “Not sure” and “Human written” respectively. We get the final sentiment labels of adversarial examples by voting. For example, if two workers annotate an example as “Positive” and one worker annotates it as “Negative”, we record its annotated label as “Positive”. 
We obtain the validity rate by calculating the percentage of the adversarial examples which are annotated with the same sentiment labels as corresponding original 6077 Victim Model Word Substitution Method Search Algorithm IMDB SST-2 SNLI %M %I PPL %M %I PPL %M %I PPL BiLSTM Embedding/LM Genetic 9.76 5.49 124.20 12.03 7.08 319.98 13.31 14.12 235.20 Greedy 7.84 5.23 112.84 10.63 3.71 287.45 12.60 9.42 205.50 PSO 7.00 8.07 113.99 12.78 6.68 339.46 14.82 10.82 255.69 Synonym Genetic 7.60 6.07 137.51 11.35 5.32 357.19 12.60 24.78 283.95 Greedy 6.47 4.49 115.31 10.25 4.65 317.27 12.32 21.37 311.04 PSO 5.42 3.45 109.27 10.55 5.12 331.96 12.56 20.83 307.51 Sememe Genetic 5.30 2.55 105.24 10.04 3.48 298.49 11.30 11.64 205.61 Greedy 4.89 1.80 97.49 9.36 2.79 276.53 12.11 10.95 218.72 PSO 3.71 1.44 88.98 9.06 3.17 276.53 11.72 11.08 222.40 BERT Embedding/LM Genetic 7.41 4.22 106.12 10.41 5.09 314.22 13.04 15.09 225.92 Greedy 5.53 4.45 97.21 9.23 3.04 276.42 11.80 13.73 206.46 PSO 5.97 7.98 101.66 11.64 6.70 343.89 14.22 14.43 245.95 Synonym Genetic 5.72 5.59 114.57 9.62 4.62 353.05 13.09 13.01 311.14 Greedy 4.49 4.48 98.60 8.51 4.12 316.30 11.60 11.65 285.00 PSO 4.63 4.33 100.81 9.20 4.72 337.82 12.99 13.32 302.83 Sememe Genetic 4.27 1.62 97.86 8.34 2.05 292.16 11.59 8.84 217.75 Greedy 3.97 1.79 92.31 8.14 2.21 279.35 10.09 7.81 207.71 PSO 3.69 1.57 90.74 8.24 2.03 289.94 11.73 10.14 223.22 Table 10: Automatic evaluation results of adversarial example quality. “%M”, “%I” and “PPL” indicate the modification rate, grammatical error increase rate and language model perplexity respectively. sentences. For each adversarial example, we use the average of the naturality scores given by three workers as its final naturality score. B Automatic Evaluation Results of Adversarial Example Quality We present the automatic evaluation results of adversarial example quality of all the combination models in Table 10. We can find that Sememe and PSO obtain higher overall adversarial example quality than other word substitution methods and adversarial example search algorithms, respectively. C Adjustment of Hyper-parameters of Embedding/LM+Genetic The word substitution strategy Embedding/LM has three hyper-parameters: the number of the nearest words N, the euclidean distance threshold of word embeddings δ and the number of words retained by the language model filtering K. For original Embedding/LM+Genetic, N = 8, δ = 0.5 and K = 4, which are the same as Alzantot et al. (2018). To increase the attack success rates, we change these hyper-parameters to N = 20, δ = 1 and K = 10. D Statistical Significance of Human Evaluation Results We conduct Student’s t-tests to measure the statistical significance between the difference of human evaluation results of different models. The results of attack validity and adversarial example naturality are shown in Table 11 and 12, respectively. “Embedding/LM+Genetic*” refers to the Embedding/LM+Genetic model with adjusted hyper-parameters. E Case Study We display some adversarial examples generated by the baseline attack models and our attack model on IMDB, SST-2 and SNLI in Table 13, 14 and 15 respectively. 
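As a reference point for the comparisons reported in Tables 11 and 12 below, the significance testing described in Appendix D can be sketched as follows. This assumes SciPy and an unpaired two-sample Student's t-test over per-example human scores; the exact test configuration is an assumption, not something stated by the authors.

```python
from scipy import stats

def compare_models(scores_model1, scores_model2, alpha=0.05):
    """Student's t-test on per-example human evaluation scores of two models."""
    t_stat, p_value = stats.ttest_ind(scores_model1, scores_model2)
    return p_value, p_value < alpha   # (p-value, whether the difference is significant)
```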
6078 Victim Model Model 1 Model 2 p-value Significance Conclusion Bi-LSTM Sememe+PSO Embedding/LM+Genetic 0.14 Not Significant = Sememe+PSO Synonym+Greedy 0.37 Not Significant = Sememe+PSO Embedding/LM+Genetic* 0.01 Significant > Original Input Embedding/LM+Genetic 9.52e-10 Significant > Original Input Synonym+Greedy 1.78e-6 Significant > Original Input Sememe+PSO 3.55e-7 Significant > Original Input Embedding/LM+Genetic* 2.42e-13 Significant > BERT Sememe+PSO Embedding/LM+Genetic 0.29 Not Significant = Sememe+PSO Synonym+Greedy 0.12 Not Significant = Sememe+PSO Embedding/LM+Genetic* 5.86e-4 Significant > Original Input Embedding/LM+Genetic 2.19e-5 Significant > Original Input Synonym+Greedy 3.33e-9 Significant > Original Input Sememe+PSO 1.78e-6 Significant > Original Input Embedding/LM+Genetic* 2.30e-15 Significant > Table 11: The Student’s t-test results of attack validity of different models, where “=” means “Model 1” performs as well as “Model 2” and “>” means “Model 1” performs better than “Model 2”. Victim Model Model 1 Model 2 p-value Significance Conclusion Bi-LSTM Sememe+PSO Embedding/LM+Genetic 0.48 Not Significant = Sememe+PSO Synonym+Greedy 0.41 Not Significant = Human Authored Embedding/LM+Genetic 0.14 Not Significant = Human Authored Synonym+Greedy 0.10 Not Significant = Human Authored Sememe+PSO 0.15 Not Significant = BERT Sememe+PSO Embedding/LM+Genetic 0.31 Not Significant = Sememe+PSO Synonym+Greedy 0.31 Not Significant = Human Authored Embedding/LM+Genetic 0.06 Not Significant = Human Authored Synonym+Greedy 0.06 Not Significant = Human Authored Sememe+PSO 0.08 Not Significant = Table 12: The Student’s t-test results of adversarial example naturality of different models, where “=” means “Model 1” performs as well as “Model 2”. 6079 IMDB Example 1 Original Input (Prediction = Positive) In my opinion this is the best oliver stone flick probably more because of influence than anything else. Full of dread from the first moment to its dark ending. Embedding/LM+Genetic (Prediction = Negative) In my view this is the higher oliver stone flick presumably more because of influence than anything else. Total of anxiety from the first moment to its dark ending. Synonym+Greedy (Prediction = Negative) In my opinion this embody the respectable oliver stone flick probably more because of influence than anything else. Broad of dread from the first moment to its dark ending. Sememe+PSO (Prediction = Negative) In my opinion this is the bestest oliver stone flick probably more because of influence than anything else. Ample of dread from the first moment to its dark ending. IMDB Example 2 Original Input (Prediction = Negative) One of the worst films of it’s genre. The only bright spots were lee showing some of the sparkle she would later bring to the time tunnel and batman. Embedding/LM+Genetic (Prediction = Positive) One of the biggest films of it’s genre. The only glittering spots were lee showing some of the sparkle she would afterwards bring to the time tunnel and batman. Synonym+Greedy (Prediction = Positive) One of the tough films of it’s genre. The only bright spots follow lee present some of the spark she would later bring to the time tunnel and batman. Sememe+PSO (Prediction = Positive) One of the seediest films of it’s genre. The only shimmering spots were lee showing some of the sparkle she would later bring to the time tunnel and batman. Table 13: Adversarial examples generated by two baseline methods and our model on IMDB. 
6080 SST-2 Example 1 Original Input (Prediction = Positive) Some actors have so much charisma that you ’d be happy to listen to them reading the phone book. Embedding/LM+Genetic (Prediction = Negative) Some actors have so much charisma that you ’d be cheery to listen to them reading the phone book. Synonym+Greedy (Prediction = Negative) Some actors have so much charisma that you ’d be happy to listen to them take the phone book. Sememe+PSO (Prediction = Negative) Some actors have so much charisma that you ’d be jovial to listen to them reading the phone book. SST-2 Example 2 Original Sentence (Prediction = Negative) The movie ’s biggest is its complete and utter lack of tension. Embedding/LM+Genetic (Prediction = Positive) The movie ’s biggest is its complete and utter absence of stress. Synonym+Greedy (Prediction = Positive) The movie ’s great is its complete and utter want of tension. Sememe+PSO (Prediction = Positive) The movie ’s biggest is its complete and utter dearth of tension. Table 14: Adversarial examples generated by two baseline methods and our model on SST-2. SNLI Example 1 Premise: A smiling bride sits in a swing with her smiling groom standing behind her posing for the male photographer while a boy holding a bottled drink and another boy wearing a green shirt observe . Original Input(Prediction = Entailment) Two boys look on as a married couple get their pictures taken. Embedding/LM+Genetic (Prediction = Contradiction) Two man stare on as a wedding couple get their pictures taken. Synonym+Greedy (Prediction = Contradiction) Two boys look on as a married couple puzzle their pictures taken. Sememe+PSO (Prediction = Contradiction) Two boys stare on as a wedding couple get their pictures taken. SNLI Example 2 Premise: A dog with a purple leash is held by a woman wearing white shoes . Original Input (Prediction = Entailment) A man is holding a leash on someone else dog. Embedding/LM+Genetic (Prediction = Contradiction) A man is holding a leash on someone further dog. Synonym+Greedy (Prediction = Contradiction) A humans is holding a leash on someone else dog. Sememe+PSO (Prediction = Contradiction) A man is holding a leash on someone else canine. Table 15: Adversarial examples generated by two baseline methods and our model on SNLI.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6081–6094 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6081 Benchmarking Multimodal Regex Synthesis with Complex Structures Xi Ye Qiaochu Chen Isil Dillig Greg Durrett Department of Computer Science The University of Texas at Austin {xiye,qchen,isil,gdurrett}@cs.utexas.edu Abstract Existing datasets for regular expression (regex) generation from natural language are limited in complexity; compared to regex tasks that users post on StackOverflow, the regexes in these datasets are simple, and the language used to describe them is not diverse. We introduce STRUCTUREDREGEX, a new regex synthesis dataset differing from prior ones in three aspects. First, to obtain structurally complex and realistic regexes, we generate the regexes using a probabilistic grammar with pre-defined macros observed from real-world StackOverflow posts. Second, to obtain linguistically diverse natural language descriptions, we show crowdworkers abstract depictions of the underlying regex and ask them to describe the pattern they see, rather than having them paraphrase synthetic language. Third, we augment each regex example with a collection of strings that are and are not matched by the ground truth regex, similar to how real users give examples. Our quantitative and qualitative analysis demonstrates the advantages of STRUCTUREDREGEX over prior datasets. Further experimental results using various multimodal synthesis techniques highlight the challenge presented by our dataset, including non-local constraints and multi-modal inputs.1 1 Introduction Regular expressions (regexes) are known for their usefulness and wide applicability, and yet they are hard to understand and write, even for many programmers (Friedl, 2006). Recent research has therefore studied how to construct regexes from natural language (NL) descriptions, leading to the emergence of NL-to-regex datasets including 1Code and data available at https://www.cs. utexas.edu/˜xiye/streg/. SepTemp concat( Seg , Delimiter , Seg , Delimiter , Seg ) rep(<num>,3) <-> rep(<num>,3) rep(<num>,4) <-> Ground Truth Regex Figure Examples 012-345-6789 341-415-0341 210-543-071 210-521-73427 positive: negative: Natural Language Description I want three hyphen-separated numbers. The first and second numbers have 3 digits while the last one has 4 digits. Figure 1: Our dataset collection process. A regex is sampled from our grammar, then we render an abstract figure and generate distinguishing positive/negative examples. We present the figure and examples to crowdworkers to collect natural language descriptions. KB13 (Kushman and Barzilay, 2013) and NLTURK (Locascio et al., 2016). However, KB13 is small in size, with only 814 NL-regex pairs with even fewer distinct regexes. Locascio et al. (2016) subsequently employed a generate-and-paraphrase procedure (Wang et al., 2015) to create the larger NL-TURK dataset. However, the regexes in this dataset are very simple, and the descriptions are short, formulaic, and not linguistically diverse because of the paraphrasing annotation procedure (Herzig and Berant, 2019). As a result, even when models achieve credible performance on these datasets, they completely fail when evaluated on the STACKOVERFLOW dataset (Ye et al., 2019), a real-world dataset collected from users seeking help on StackOverflow. 
The limited size of this dataset (only 62 NL-regex pairs) makes it 6082 (a) I need to validate the next pattern: starts with “C0” and finish with 4 digits exactly. and(startwith(<C0>)),endwith(rep(<num>,4))) (b) i need regular expression for : one or two digits then ”.” and one or two digits. concat(reprange(<num>,1,2),concat(<.>,reprange(<num>,1,2))) (c) The input will be in the form a colon (:) separated tuple of three values. The first value will be an integer, with the other two values being either numeric or a string. concat(repatleast(<num>,1),rep(concat(<:>,or(repatleast(<let>,1), repatleast(<num>,1))),2)) Figure 2: Examples of complex regexes from STACKOVERFLOW. Each regex can be viewed as a set of components composed with a high-level template. Regex (a), for example, can be as viewed the intersection of two constraints specifying the characteristics of the desired regex. (rep means repeat). unsuitable for large-scale training, and critically, the complexity of regexes it features means that regex synthesis systems must leverage the userprovided positive and negative examples (strings that should be matched or rejected by the target regex) in order to do well. To enable the development of large-scale neural models in this more realistic regex setting, we present STRUCTUREDREGEX, a new dataset of English language descriptions and positive/negative examples associated with complex regexes. Using a new data collection procedure (Figure 1), our dataset addresses two major limitations in NL-TURK. First, we generate our regexes using a structured probabilistic grammar which includes macro rules defining high-level templates and constructions that involve multiple basic operators. These grammar structures allow us to sample more realistic regexes, with more terminals and operators, while avoiding vacuous regexes. By contrast, the random sampling procedure in NL-TURK leads to simple regexes, and attempting to sample more complex regexes results in atypical regex structures or even contradictory regexes that do not match any string values (Ye et al., 2019). Second, to achieve more realistic language descriptions, we prompt Turkers to write descriptions based on abstract figures that show the desired regexes. We design a set of visual symbols and glyphs to draw a given regex with minimal textual hints. We thereby avoid priming Turkers to a particular way of describing things, hence yielding more linguistically diverse descriptions. Using this methodology, we collect a total of 3,520 English descriptions, paired with ground truth regexes and associated positive/negative examples. We conduct a comprehensive analysis and demonstrate several linguistic features present in our dataset which do not occur in past datasets. We evaluate a set of baselines, including grammarbased methods and neural models, on our dataset. In addition, we propose a novel decoding algorithm that integrates constrained decoding using positive/negative examples during inference: this demonstrates the potential of our dataset to enable work at the intersection of NLP and program synthesis. The performance of the best existing approach on STRUCTUREDREGEX only reaches 37%, which is far behind 84% on NL-TURK. However, this simple model can nevertheless solve 13% of the STACKOVERFLOW dataset, indicating that further progress on this dataset can be useful for real-world scenarios. 2 Structured Regex Generation Process We first describe the structured generative process we adopt to produce the regexes in our dataset. 
For better readability, we denote regexes using a domain specific language (DSL) similar to regex DSLs in prior work (Locascio et al., 2016; Ye et al., 2019). Our DSL has the same expressiveness as a standard regular language and can be easily mapped back to standard regular expressions.2 To collect the NL-TURK dataset, Locascio et al. (2016) sampled regexes using a hand-crafted grammar similar to a standard regex DSL. However, regexes sampled from this process can easily have conflicts (e.g. and(<let>,<num>)) or redundancies (e.g. or(<let>,<low>)). One solution to this problem is rejection sampling, but this still does not yield regexes with compositional, realworld structure. We show three prominent types of composition observed from STACKOVERFLOW in Figure 2. Each regex above is built by assembling several sub-regexes together according to a highlevel template: regex (a) is the intersection of two base regexes expressing constraints, regex (b) is a sequence of three simple parts, and regex (c) is a 2Refer to the appendix for details of our DSL. 6083 CatTemp concat( Comp , Comp , Comp ) reprange( Expr ,1,2) Literal reprange( Expr ,1,2) · · · · · · · · · one or two digits then “.” and one or two digits SepTemp concat( Seg , Delimiter , Seg , Delimiter , Seg ) CatTemp IntTemp IntTemp Const Const · · · · · · · · · · · · · · · three delimited values, first will be an integer, with other two being either numeric or a string IntTemp and( Cons , Cons ) startwith( Expr ) endwith( Expr ) · · · · · · starts with “C0” and end with 4 digits Cons start[end]with(Expr) | not(start[end]with(Expr)) # must (not) start/end with contain(Expr) | not(contain(Expr)) # must (not) contain rep(<any>,k) | repatleast(<any>,k) |reprange(<any>,k,k) # length constraints AdvStartwithCons | AdvEndwithCons # adversative macro (e.g., start with capitals except A) CondContainCons # conditional macro. (e.g. letter, if contained, must be after a digit) Comp Literal | or(Literal,Literal,...) # literals like digits, letters, strings, or set of literals. rep(Expr,k) | repatleast(Expr,k) | reprange(Expr,k,k) # e.g, 3 digits, 2 - 5 letter, etc. optional(Comp) # components can be optional. Figure 3: Examples of our top-level templates and how they cover the three regexes in Figure 2, and overview of sub-regexes (in table) that can possibly be derived from Cons and Comp. Expr as a category here indicates various different constrained sets of sub-regexes. More detail about this structure is available in the full grammar in the appendix. A string of numbers and digits that must start with a number except “0”. IntTemp and( Cons , Cons ) ConsistOfCons AdvStartwithCons repatleast( LiteralSet ,1) and(startwith( Literal ),not(startwith( Literal ))) or(<num>,<let>) <num> <0> and(repatleast(or(<num>,<let>),1),and(startwith(<num>),not(startwith(<0>)))) Figure 4: The generation of a deep and complex regex using our grammar. Here, AdvStartwithCons is a macro rule that yields a complex sub-tree with an adversative constraint. list of three segments delimited by a constant. We observe that these three templates actually capture a wide range of possible regex settings. The first, for example, handles password validation-esque settings where we have a series of constraints to apply to a single string. The second and third reflect matching sequences of fields, which may have shared structured (regex (c)) or be more or less independent (regex (b)). 
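To make the DSL concrete, here is a minimal Python sketch of how such terms could be represented as small abstract syntax trees. The Node class and the helper below are illustrative assumptions, not code used to build the dataset, but the printed forms match the DSL strings of regexes (a) and (b) in Figure 2.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    """One DSL term: an operator such as and/concat/rep, or a terminal such as <num>."""
    op: str                       # operator or terminal name
    args: List["Node"] = None     # sub-regex arguments
    params: List[int] = None      # integer arguments for rep / reprange / repatleast

    def __str__(self):
        if not self.args and not self.params:
            return self.op
        parts = [str(a) for a in (self.args or [])] + [str(k) for k in (self.params or [])]
        return f"{self.op}({','.join(parts)})"

def T(name):
    """Terminal / constant helper."""
    return Node(name)

# Regex (a) from Figure 2: starts with "C0" and ends with exactly 4 digits.
regex_a = Node("and", [Node("startwith", [T("<C0>")]),
                       Node("endwith", [Node("rep", [T("<num>")], [4])])])

# Regex (b) from Figure 2: one or two digits, then ".", then one or two digits.
regex_b = Node("concat", [Node("reprange", [T("<num>")], [1, 2]),
                          Node("concat", [T("<.>"),
                                          Node("reprange", [T("<num>")], [1, 2])])])

print(regex_a)  # and(startwith(<C0>),endwith(rep(<num>,4)))
print(regex_b)  # concat(reprange(<num>,1,2),concat(<.>,reprange(<num>,1,2)))
```

Viewed this way, the three templates of Figure 2 are simply different top-level shapes of such trees: an and of constraints, a flat concat of components, or delimiter-separated segments.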
2.1 Structured Grammar To generate realistic regexes in these forms, we rely on a structured hand-crafted grammar. The top level of our grammar specifies three templates distilled from STACKOVERFLOW examples: INTERSECTION, CONCATENATION, and SEPARATION, which mimic patterns of real-world regexes. In Figure 3, we show how regexes in Figure 2 can be derived from our templates. The INTERSECTION template (left) intersects several base constraints with the and operator; the CONCATENATION template (middle) concatenates several base components with the concat operator. SEPARATION (right) is a more complex type, generating a list of constant-separated INTERSECTION or CONCATENATION regexes which may be identical or share common components. Across all templates, the components are subregexes falling into a few high-level types (notably Cons and Comp), which are depth-limited to control the overall complexity (discussed in Appendix B.2). To make these component regexes more realistic as well, we design several macro rules that expand to more than one operator. The macros are also extracted from real-world examples and capture complex relations like adversative (Figure 4) and conditional (Table 2) relations. Although our hand-crafted grammar does not cover every possible construction allowed by the regular expression language, it is still highly expressive. Based on manual analysis, our grammar covers 80% of the real-world regexes in STACKOVERFLOW, whereas the grammar of NL-TURK only covers 24% (see Section 4). Note that some constructions apparently omitted by our grammar are equivalent to ones supported by our grammar: e.g., we don’t allow a global startwith constraint in the CONCATENATION template, but this constraint can be expressed by having the first component of the concatenation incorporate the desired 6084 constraint. 2.2 Sampling from the Regex Grammar Although our structural constraints on the grammar already give rise to more realistic regexes, we still want to impose further control over the generative process to mimic properties of real-world regexes. For example, there are sometimes repeating components in CONCATENATION regexes, such as regex (b) from Figure 2. We encourage such regexes by dynamically modifying the probability of applying the grammar rules while we are expanding a regex based on the status of the entire tree that has currently been induced. For example, suppose we are building regex (b) from Figure 2, and suppose we currently have concat(reprange(<num>, 1,2),concat(<.>,Comp)), where Comp is a non-terminal that needs to be expanded into a sub-regex. Because we already have reprrange(<num>,1,2) and <.> in the current tree, we increase the probability of expanding Comp to generate these particular two sub-regexes, allowing the model to copy from what it has generated before.3 In addition to copying, we also change the sampling distribution when sampling children of certain grammar constructs to control for complexity and encourage sampling of valid regexes. For example, the child of a startwith expression will typically be less complex and compositional than the child of a Comp expression, so we tune the probabilities of sampling compositional AST operators like or appropriately. 3 Dataset Collection 3.1 Positive/Negative Example Generation The STACKOVERFLOW dataset (Ye et al., 2019) shows that programmers often provide both positive and negative examples to fully convey their intents while specifying a complicated regex. 
Therefore, we augment our dataset with positive and negative examples for each regex. Our model will use these examples to resolve ambiguity present in the natural language descriptions. However, the examples can also help Turkers to better understand the regexes they are describing during the data collection process. 3This component reuse bears some similarity to an Adaptor Grammar (Johnson et al., 2007). However, we modify the distributions in a way that violates exchangeability, making it not formally equivalent to one. positive: negative: A1234 negative: a123 concat(<low>,repatleast(<num>,4)) concat(<cap>,repatleast(<num>,4)) concat(<low>,rep(<num>,3)) perturb perturb a1234 b5678 Figure 5: The process of generating distinguishing negative examples by minorly perturbing each of the subregexes in the ground truth regex. We aim to generate diverse and distinguishing examples similar to human-written ones, which often include corner cases that differentiate the ground truth regex from closely-related spurious ones. We can achieve this by enumerating examples that cover the states in the deterministic finite automaton (DFA) defined by the given regex4 and reject similar but incorrect regexes. We employ the Automaton Library (Møller, 2017) to generate the examples in our work. Positive examples are generated by stochastically traversing the DFA. For negative examples, randomly sampling examples from the negation of a given regex will typically produce obviously wrong examples and not distinguishing negative examples as desired. Therefore, we propose an alternative approach shown in Figure 5 for generating negative examples. We apply minor perturbations to the ground truth regex to cause it to accept a set of strings that do not intersect with the set recognized by the original regex. The negative examples can be derived by sampling a positive string from one of these “incorrect” regexes. For each regex in our dataset, we generate 6 positive examples and 6 negative examples. These numbers are comparable to the average number of examples provided by STACKOVERFLOW users. 3.2 Figure Generation As stated previously, we avoid the paradigm of asking users to paraphrase machine-generated regex descriptions, as this methodology can yield formulaic and artificial descriptions. Instead, we ask users to describe regexes based on figures that illustrate how the regex is built. We show one example figure of a SEPARATION regex in Figure 6. In general, we abstract a given regex as a series of blocks linked with textual descriptions of its content and constraints. For instance, startwith and endwith are denoted by shading the head or tail of a block. By linking multiple blocks to shared tex4Recall that although our DSL is tree-structured, it is equivalent in power standard regexes, and hence our expressions can be mapped to DFAs. 6085 Three comma separated segments. The first segment is 2 digits. The other two consist of digits or letters but must start with a letter and contain “0”. Figure 6: An example automatically generated figure of a SEPARATION regex and corresponding description annotated by a Turker. tual descriptions, we hope to encourage Turkers to notice the correlation and write descriptions accordingly. Finally, we have different textual hints for the same concept: “contain x” in Figure 6 may appear as “have x” elsewhere. These figures are rendered for each regex in the MTurk interface using JavaScript. 
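The example-generation step of Section 3.1 can be sketched as follows. The real pipeline perturbs the ground-truth regex and uses the dk.brics Automaton library over DFAs; the sketch below is a simplified, brute-force stand-in that works with standard Python regexes over a tiny alphabet, using the perturbations shown in Figure 5. The helper names and the alphabet are assumptions for illustration only.

```python
import itertools
import random
import re

ALPHABET = "abAB019-."   # tiny alphabet, for illustration only

def enumerate_strings(max_len=5):
    """Brute-force candidate strings over the tiny alphabet."""
    for n in range(max_len + 1):
        for chars in itertools.product(ALPHABET, repeat=n):
            yield "".join(chars)

def sample_examples(true_rx, perturbed_rxs, k=6, max_len=5, seed=0):
    """Return (positive, negative) examples for a ground-truth regex.

    Positives match the true regex; negatives are strings accepted by a minor
    perturbation of the true regex but rejected by the true one, mimicking the
    'distinguishing' negatives of Figure 5.
    """
    rng = random.Random(seed)
    true_pat = re.compile(true_rx)
    pos, neg = [], []
    for s in enumerate_strings(max_len):
        if true_pat.fullmatch(s):
            pos.append(s)
        elif any(re.fullmatch(p, s) for p in perturbed_rxs):
            neg.append(s)
    return rng.sample(pos, min(k, len(pos))), rng.sample(neg, min(k, len(neg)))

# concat(<low>,repatleast(<num>,4)) in standard syntax, with the two
# perturbations shown in Figure 5.
true_rx = r"[a-z][0-9]{4,}"
perturbed = [r"[A-Z][0-9]{4,}",   # <low>  -> <cap>
             r"[a-z][0-9]{3}"]    # repatleast(.,4) -> rep(.,3)

positives, negatives = sample_examples(true_rx, perturbed)
print(positives)   # 6 strings matching [a-z][0-9]{4,}, e.g. "a0019"
print(negatives)   # near misses such as "A0019" or "a001"
```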
3.3 Crowdsourcing Task We collected the STRUCTUREDREGEX dataset on Amazon Mechanical Turk (MTurk). For each HIT, the Turkers are presented with a regex figure and a set of positive/negative examples. Then, they are asked to write down several sentences describing the regex, as well as one additional positive example that matches the regex. We only accept a description if the submitted positive example is matched by the ground-truth regex; this helps filter out some cases where the Turker may have misunderstood the regex. We show an example HIT in Appendix C. In early pilot studies, we explored other ways of abstractly explaining regexes to Turkers, such as providing more examples and an associated set of keywords, yet none of these methods led to users generating sufficiently precise descriptions. By contrast, our figures fully specify the semantics of the regexes while only minimally biasing Turkers towards certain ways of describing them. We generated 1,200 regexes (400 from each template), assigned each regex to three Turkers, and collected a total of 3,520 descriptions after rejecting HITs. In general, each Turker spent 2 to 3 minutes on each of the HITs, and we set the reward to be $0.35. The total cost of collecting our dataset was $1,512, and the average cost for each description is $0.43. Dataset KB13 TURK STREG SO size 824 10000 3520 62 #. unique words 207 557 873 301 avg. NL length 8 12 33 25 avg. reg size 5 5 15 13 avg. reg depth 2.5 2.3 6.0 4.0 Table 1: Statistics of our dataset and prior datasets. Compared to KB13 and NL-TURK, our dataset contain more diverse language and more complex regexes, comparable to the real STACKOVERFLOW dataset. Quality To ensure the quality of collected responses, we require the Turkers to first take a qualification test which simply requires describing one regex that we have specified in advance. We then check that the description for this regex is sufficiently long and that it contains enough of our manually-written correct base regex concepts. We manually observed from the responses that various styles were adopted by different Turkers for describing the same type of regexes. For instance, given regex (b) in Figure 2, some Turkers tend to enumerate every component in order, describing it as one or two digits followed by a dot followed by one or two digits; some other Turkers prefer grouping identical components and describing the components out of order, describing it as the first and third parts are one or two digits, and the second part is a dot. These distinct styles lead to a diversity of linguistic phenomena, which is further analyzed in Section 4. Because we aim for high linguistic diversity in our dataset, we prohibited a single Turker from doing more than 300 HITs. Furthermore, we found anecdotal evidence that the task was engaging for users, which we took as a positive signal for generation quality. We received messages about our HITs from some Turkers telling us that our HIT was “really interesting” and they “enjoyed doing it.” Splitting the Dataset Since our dataset consists of natural language descriptions written by annotators, there is possibly bias introduced by training and testing on the same annotators (Geva et al., 2019). Therefore, in addition to the standard Train/Development/Test splits, we also form a Test-E (excluded) which consists only of annotations from annotators unseen in the training set. 
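A minimal sketch of how such an annotator-held-out split could be constructed is given below; the field names (annotator_id, regex) and the split ratios are assumptions for illustration, not the exact procedure used for STRUCTUREDREGEX. The precise constraints we impose on the splits are spelled out next.

```python
import random
from collections import defaultdict

def split_with_heldout_annotators(examples, heldout_frac=0.1, seed=0):
    """Split annotations so that Test-E only contains annotators unseen in training.

    `examples` is a list of dicts assumed to carry 'annotator_id' and 'regex' keys.
    Returns (train_dev, test, test_e).
    """
    rng = random.Random(seed)
    by_annotator = defaultdict(list)
    for ex in examples:
        by_annotator[ex["annotator_id"]].append(ex)

    annotators = list(by_annotator)
    rng.shuffle(annotators)
    n_held = max(1, int(heldout_frac * len(annotators)))
    heldout = set(annotators[:n_held])

    test_e = [ex for a in heldout for ex in by_annotator[a]]
    rest = [ex for a in annotators[n_held:] for ex in by_annotator[a]]

    # Keep regexes disjoint between Train/Dev and the in-distribution Test set.
    regexes = sorted({ex["regex"] for ex in rest})
    rng.shuffle(regexes)
    cut = int(0.8 * len(regexes))
    train_regexes = set(regexes[:cut])
    train_dev = [ex for ex in rest if ex["regex"] in train_regexes]
    test = [ex for ex in rest if ex["regex"] not in train_regexes]
    return train_dev, test, test_e
```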
We ensure that Train, Dev, and both two test sets (Test and Test-E) have mutually exclusive regexes from each other (Test and Test-E can have common regexes), and Test-E is annotated entirely by 6086 TURK STREG Example NL from STREG multi-sentence 0% 70% The string has 6 or more characters. The string must start with a digit. ambiguity 2.3% 20.6% The sequence starts with a letter followed by 2 numbers. abstraction 0% 13.3% The first part of a single string consists of 1 or more “0” followed by 2 capital letters. The second part of the string must follow the same rules. non-local constraint 0% 16.7% There are 3 dash separated strings. The first is 1 to 4 “A” . The second and third consist of 1 or 2 “x” followed by 1 to 3 numbers and 2 letters. coreference 5.1% 29.7% The string starts with a number. It ends with 1 to 4 lower or capital letters. condition relation 0% 3.5% If there is a capital letter it must be after a digit. adversative relation 0% 3.7% The string start with capital letter but it should not be a “A”. Table 2: Qualitative analysis on 150 descriptions from NL-TURK and our dataset (50 from each template). We show the percentage of examples containing each phenomenon. Our dataset features more of these challenging linguistic phenomena compared to prior synthetic datasets. a disjoint set of annotators from those who annotated the training or development set. The final size of the splits are: 2173 (61.7%), 351 (10.0%), 629 (17.9%), 367 (10.4%). 4 Dataset Analysis We demonstrate the advantages of our dataset over prior datasets (Kushman and Barzilay, 2013; Locascio et al., 2016) through both quantitative and qualitative analysis. We list the key statistics of our dataset as well as KB13 and NL-TURK for comparison in Table 1. Compared to past synthetic datasets, our dataset has more diverse and sophisticated language. The average NL length of our dataset is twice as long as that of NL-TURK, and the descriptions contain many more unique words even though our dataset contains fewer regexes. In addition, our dataset contains more complex regexes that are closer to the complexity of real-world regexes found on StackOverflow, whereas regexes in previous datasets are significantly simpler. Manual Analysis We further manually analyze 150 descriptions from past synthetic datasets and our dataset. Table 2 lists the proportion of descriptions containing each of several phenomena: examples that are multi-sentence, examples with clear syntactic or semantic ambiguity, examples using abstraction to refer to different parts of the regex, examples invoking non-local constraints, and examples with nontrivial coreference. The language from our dataset is organic and diverse, since we allow Turkers to compose their own descriptions. We find that macros and complex constraints in our structured grammar can successfully trigger interesting language. For instance, the abstraction reflects repetition in concatenation regexes, and the bottom part of Table 2 reflects the KB13 TURK STREG Word Coverage 27.1% 34.4% 55.9% Regex Coverage 23.5% 23.5% 84.3% Table 3: Distribution mismatch analysis with respect to STACKOVERFLOW on past datasets and our dataset. Our dataset covers significantly more words and regexes, and is closer to the real-world dataset. complex macros. Furthermore, the complex and ambiguous language highlights the necessity of including examples together with language to fully specify a regex. For instance, ambiguity is common in our descriptions. 
However, many of the ambiguous descriptions can be resolved with the help of examples. Concretely, the description for ambiguity from Table 2 can be easily interpreted as startwith(concat(<let>, repeat(<num>,2))) while the ground truth is concat(<let>,repeat(<num>,2)). By simply adding one negative example, “a123”, the ground truth can be distinguished from the spurious regex. Comparison to STACKOVERFLOW Since our goal was to produce realistic regex data, we analyze how well the real-world STACKOVERFLOW dataset is covered by data from STRUCTUREDREGEX compared to other datasets (Kushman and Barzilay, 2013; Locascio et al., 2016). We ignore 11 of the STACKOVERFLOW examples that involve the high-level decimal concept, which is beyond the scope of our dataset and past synthetic datasets. In addition, we anonymize all the constants and integer parameters (e.g., repeat(<x>,9) is anonymized as repeat(const,int)). The statistics (Table 3) suggest that our dataset is more highly similar to real-world regexes on StackOverflow, especially in terms of regex distribution. 6087 5 Methods We evaluate the accuracy of both existing grammar-based approaches and neural models, as well as a novel method that targets the multimodal nature of our dataset. Existing Approaches SEMANTIC-UNIFY (Kushman and Barzilay, 2013) is a grammarbased approach that relies on a probabilistic combinatory categorical grammar to build the regexes. DEEPREGEX (Locascio et al., 2016) directly translates natural language descriptions into regexes using a seq-to-seq model enhanced with attention (Luong et al., 2015) without considering examples. We re-implemented DEEPREGEX with slightly different hyperparameters; we refer to our re-implementation as DEEPREGEX (OURS). DEEPREGEX+FILTER (Ye et al., 2019) adapts DEEPREGEX so as to take examples into account by simply filtering the k-best regexes based on whether a regex accepts all the positive examples and rejects all the negative ones. Example-Guided Decoding Although DEEPREGEX+FILTER is able to take advantage of positive and negative string examples, these examples are completely isolated in the training and inference phase. We propose to make use of examples during inference with the technique of overand under- approximation (Lee et al., 2016) used in the program synthesis domain. The core idea of our approach is that, for each partially completed regex during decoding, we use the approximation technique to infer whether the regex can possibly match all positive or reject all negative examples. If this is impossible, we can prune this partial regex from our search. This approach allows us to more effectively explore the set of plausible regexes without increasing the computational budget or beam size. As an example, consider the ground truth regex and(startwith(<low>),endwith(<num>)) with one corresponding positive example “00x”. Suppose that the decoder has so far generated the incomplete regex and(startwith(<cap>),. To produce a syntactically valid regex, the decoder needs to generate a second argument for the and. By appending star(<any>) as its second argument, we can see that there is no completion here that will accept the given positive example, allowing us to reject this regex from the beam. Under-approximation works analogously, Approach KB13 TURK STREG SEMANTIC-UNIFY 65.5% 38.6% 1.8% DEEPREGEX (Locascio et al.) 65.6% 58.2% − DEEPREGEX (Ours) 66.5% 60.2% 24.5% DEEPREGEX + FILTER 77.7% 83.8% 37.2% Table 4: DFA-equivalent accuracy on prior datasets and our dataset. 
The performance on our dataset using any model is much lower than the performance on existing datasets. completing regexes with maximally restrictive arguments and checking that negative examples are rejected. We integrate the aforementioned technique in the beam decoding process by simply pruning out bad partial derivations at each timestep. We refer to this approach as DEEPREGEX + APPROX. 6 Experiments 6.1 Comparison to Prior Datasets We evaluate the baseline models on KB13, NLTURK, and our dataset (Table 4). The results show that our dataset is far more challenging compared to existing datasets. Traditional grammar baseline can scarcely solve our dataset. The best baseline, DEEPREGEX + FILTER, achieves more than 77.7% on KB13 and 83.8% NL-TURK when these datasets are augmented with examples, but can only tackle 37.2% of our dataset. Additionally, the comparison between DEEPREGEX and DEEPREGEX + FILTER demonstrates that simply filtering the outputs of neural model leads to a substantial performance boost on all the datasets. This supports the effectiveness of the way we specify regexes, i.e., using both natural language descriptions and examples. 6.2 Detailed Results on STRUCTUREDREGEX Table 5 shows the detailed accuracy regarding different regex templates on both Test and Test-E sets. Our DEEPREGEX + APPROX achieves best accuracy with 5.6% and 7.9% improvement over DEEPREGEX + FILTER on Test and Test-E, respectively, since it can leverage examples more effectively using over- and under- approximations during search. Accuracy varies on different types of regexes. Generally, models perform the best on concatenation regexes, slightly worse on intersection regexes, and the worst on separation regexes. Concatenation regexes usually have straightforward 6088 Approach Test Test-E Agg Int Cat Sep Agg Int Cat Sep SEMANTIC-UNIFY 2.1% 2.9% 3.1% 0.0% 1.4% 1.6% 2.4% 0.0% DEEPREGEX (Ours) 27.8% 20.7% 42.2% 19.2% 18.8% 18.0% 23.6% 14.8% DEEPREGEX + FILTER 42.6% 38.9% 55.2% 32.3% 28.1% 32.0% 32.5% 19.7% DEEPREGEX + APPROX 48.2% 45.7% 59.6% 37.9% 36.0% 39.3% 40.7% 27.9% Table 5: Results for models trained and tested on STRUCTUREDREGEX. Using the examples (the latter two methods) gives a substantial accuracy boost, and DEEPREGEX + APPROX is better than the post-hoc FILTER method, but still only achieves 48.2% accuracy on Test and 36.0% on Test-E. Separation regexes are more difficult than the other two classes, and performance for all models drops on Test-E. Train Model Acc Equiv Consistent Set DEEPREGEX Found Found TURK w/o Example 0.0% 0.0% 7.8% STREG + FILTER 9.8% 9.8% 21.6% STREG +APPROX 13.7% 17.6% 37.7% Table 6: The performance on STACKOVERFLOW-51 with models trained on NL-TURK and our dataset. We report the fraction of examples where a DFA-equivalent regex is found (Acc), where a DFA-equivalent regex is found in the k-best list, and where a regex consistent with the examples appears in the k-best list. Models trained on NL-TURK do not perform well in this setting, while our models can solve some examples. descriptions in the form of listing simple components one by one. Intersection descriptions can be more complicated because of the high-level macros specified by our grammar. Separation descriptions are the most complex ones that often involve coreferences and non-local features. Performance on Test-E is 12% lower than on Test for the models haven’t been trained on patterns of the unseen annotators. 
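The post-hoc FILTER check and the over-approximation pruning behind DEEPREGEX + APPROX (Section 5) can be illustrated with a small sketch. It operates on standard Python regexes rather than our DSL and decoder, models a partially decoded INTERSECTION-style hypothesis as the list of constraints emitted so far, and over-approximates every not-yet-decoded constraint with "accept anything"; the function names and the constraint-to-regex translations are illustrative assumptions, not the actual system.

```python
import re

def consistent_with_examples(regex, positives, negatives):
    """Post-hoc FILTER check on a fully decoded regex."""
    pat = re.compile(regex)
    return (all(pat.fullmatch(p) for p in positives)
            and not any(pat.fullmatch(n) for n in negatives))

def over_approximation(partial_constraints):
    """Most permissive completion: keep decoded constraints, drop the missing ones."""
    lookaheads = "".join(f"(?={c})" for c in partial_constraints)
    return re.compile(lookaheads + r".*")

def should_prune(partial_constraints, positives):
    """Prune a beam hypothesis if even its most permissive completion rejects a positive."""
    over = over_approximation(partial_constraints)
    return any(not over.fullmatch(p) for p in positives)

# Decoding example analogous to Section 5, with positive example "00x".
positives = ["00x"]
good_partial = [r"[0-9].*"]   # startwith(<num>) decoded so far
bad_partial = [r"[A-Z].*"]    # startwith(<cap>) decoded so far

print(should_prune(good_partial, positives))  # False -> keep in beam
print(should_prune(bad_partial, positives))   # True  -> prune
```

Under-approximation is the mirror image for disjunctive holes: fill them with a never-matching pattern (the most restrictive completion) and prune the hypothesis if a negative example is still accepted.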
6.3 Transferability Results Finally, we investigate whether a model trained on our dataset can transfer to the STACKOVERFLOW dataset. As in Section 4, we ignore instances requiring the decimal concept and only evaluate on the subset of STACKOVERFLOW with 51 instances. We compare our dataset with NLTURK for this task. As shown in Table 6, DEEPREGEX trained on NL-TURK completely fails on STACKOVERFLOW and even fails to predict reasonable regexes that are consistent with the examples. This is caused by the fact that the NLTURK dataset contains formulaic descriptions and shallow regexes that are not representative of realworld tasks. DEEPREGEX trained on our dataset can at least achieve 9.8% accuracy on STACKOVERFLOW dataset because the English descriptions in this dataset better match the desired task. Our DEEPREGEX + APPROX model successfully solves 13.7% and finds consistent regexes for 38% of the tasks, which is credible given that the performance of the same model on Test-E set is only 30%. Some additional challenges in STACKOVERFLOW are instances involving large numbers of constants or slightly more formal language since the SO users are mainly programmers. However, we believe the transfer results here show that improved performance on our dataset may transfer to STACKOVERFLOW as well, since some of the challenges also present in our Test-E set (e.g., unseen language). 6.4 Human Performance Estimate It is difficult to hire Turkers to estimate a human performance upper bound, because our task requires reckoning with both the descriptions and positive/negative examples. Unlike many NLP tasks where an example with ambiguous language is fundamentally impossible, here the examples may actually still allow a human to determine the correct answer with enough sleuthing. But to perform this task, crowdworkers would minimally need to be trained to understand the DSL constructs and how they compose, which would require an extensive tutorial and qualification test. To do the task well, Turkers would need a tool to do on-the-fly execution of their proposed regexes on the provided examples. We instead opted for a lighter-weight verification approach to estimate human performance. We adopted a post-editing approach on failure cases from our model, where we compared the model’s output with the input description and examples and corrected inconsistencies. Specifically, we sample 100 failure examples from the test set (Test plus Test-E) and manually assess the failure cases. We find 78% of failure cases contain descriptions that describe all com6089 ponents of the target regexes, but our seq-to-seq models are insufficient to capture these. There are truly some mis- or under-specified examples, such as not mentioning the optionality of one component or mistaking “I” for “l” in constants. An additional 9% (out of 100) of the errors could be fixed using the provided examples. This leaves roughly 13% of failure cases that are challenging to solve. Considering that the model already achieves 43.6% accuracy on the test set, we estimate human performance is around 90%.5 7 Related Work Data collection in semantic parsing Collecting large-scale data for semantic parsing and related tasks is a long-standing challenge (Berant et al., 2013; Wang et al., 2015). Wang et al. (2015) proposed the generate-and-paraphrase framework, which has been adopted to collect datasets in various domains (Locascio et al., 2016; Ravichander et al., 2017; Johnson et al., 2017). 
However, this process often biases annotators towards using formulaic language (Ravichander et al., 2017; Herzig and Berant, 2019). Similar to our work, past work has sought to elicit linguistically diverse data using visual elements for semantic parsing (Long et al., 2016), natural language generation (Novikova et al., 2016), and visual reasoning (Suhr et al., 2017, 2019). However, for these other tasks, the images used are depictions of an inherently graphical underlying world state; e.g., the NLVR dataset (Suhr et al., 2017) and NLVR2 (Suhr et al., 2019) are based on reasoning over the presented images, and the Tangrams dataset (Long et al., 2016) involves describing shape transformations. By contrast, regexes are typically represented as source code; there is no standard graphical schema for depicting the patterns they recognize. This changes the properties of the generated descriptions, leading to higher levels of compositionality and ambiguity because what’s being described is not naturally an image. Program and regex synthesis Recent research has tackled the problem of program synthesis from examples (Gulwani, 2011; Gulwani and Jain, 5In addition, the first author manually wrote regexes for 100 randomly sampled examples and achieved an accuracy of 95% (higher than the estimate). However, the author also has a strong prior over what synthetic regexes are likely to be in the data. 2017; Alur et al., 2013; Wang et al., 2016; Feng et al., 2018; Devlin et al., 2017; Nye et al., 2019). A closer line of work to ours uses both examples and natural language input (Yaghmazadeh et al., 2017; Ye et al., 2019; Andreas et al., 2018), which involves fundamentally different techniques. However, our work does not rely on the same sort of program synthesizer to build final outputs (Yaghmazadeh et al., 2017; Ye et al., 2019). Moreover, Andreas et al. (2018) only use language at train time, whereas we use NL at both train and test time. Finally, while several datasets on regex synthesis specifically have been released (Kushman and Barzilay, 2013; Locascio et al., 2016), we are the first to incorporate examples in the dataset. Other methods have been proposed to parse natural language into regex via rule-based (Ranta, 1998), grammar-based (Kushman and Barzilay, 2013), or neural models (Locascio et al., 2016; Zhong et al., 2018; Ye et al., 2019). Notably, Zhong et al. (2018) also generate distinguishing examples to facilitate translation, but they require a trained model to generate examples, and we organically derive examples from the structure of regexes without additional input. 8 Conclusion We introduce STRUCTUREDREGEX, a new dataset for regex synthesis from natural language and examples. Our dataset contains compositionally structured regexes paired with linguistically diverse language, and organically includes distinguishing examples. Better methods are needed to solve this dataset; we show that such methods might generalize well to real-world settings. Acknowledgments This work was partially supported by NSF Grant IIS-1814522, NSF Grant SHF-1762299, a gift from Arm, and an equipment grant from NVIDIA. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research. Thanks as well to the anonymous reviewers for their helpful comments. 
References Rajeev Alur, Rastislav Bodik, Garvit Juniwal, Milo MK Martin, Mukund Raghothaman, Sanjit A 6090 Seshia, Rishabh Singh, Armando Solar-Lezama, Emina Torlak, and Abhishek Udupa. 2013. Syntaxguided Synthesis. In 2013 Formal Methods in Computer-Aided Design (FMCAD). Jacob Andreas, Dan Klein, and Sergey Levine. 2018. Learning with Latent Language. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL). Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic Parsing on Freebase from Question-Answer Pairs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, and Pushmeet Kohli. 2017. Robustfill: Neural Program Learning under Noisy I/O. In Proceedings of the International Conference on Machine Learning (ICML). Yu Feng, Ruben Martins, Osbert Bastani, and Isil Dillig. 2018. Program Synthesis Using Conflict-driven Learning. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI). Jeffrey EF Friedl. 2006. Mastering Regular Expressions. ” O’Reilly Media, Inc.”. Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing (EMNLPIJCNLP). Sumit Gulwani. 2011. Automating String Processing in Spreadsheets Using Input-output Examples. In Proceedings of the ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL). Sumit Gulwani and Prateek Jain. 2017. Programming by Examples: PL Meets ML. In Proceedings of the Asian Symposium on Programming Languages and Systems (APLAS). Jonathan Herzig and Jonathan Berant. 2019. Don’t paraphrase, detect! Rapid and Effective Data Collection for Semantic Parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing (EMNLPIJCNLP). Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007. Adaptor Grammars: A Framework for Specifying Compositional Nonparametric Bayesian Models. In Proceedings of the Conference on Advances in Neural Information Processing Systems (NeurIPS). Nate Kushman and Regina Barzilay. 2013. Using Semantic Unification to Generate Regular Expressions from Natural Language. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NACCL). Mina Lee, Sunbeom So, and Hakjoo Oh. 2016. Synthesizing Regular Expressions from Examples for Introductory Automata Assignments. In Proceedings of the ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences (GPCE). Nicholas Locascio, Karthik Narasimhan, Eduardo DeLeon, Nate Kushman, and Regina Barzilay. 2016. Neural Generation of Regular Expressions from Natural Language with Minimal Domain Knowledge. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Reginald Long, Panupong Pasupat, and Percy Liang. 2016. Simpler Context-Dependent Logical Forms via Model Projections. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attentionbased Neural Machine Translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Anders Møller. 2017. dk.brics.automaton – finitestate automata and regular expressions for Java. http://www.brics.dk/automaton/. Jekaterina Novikova, Oliver Lemon, and Verena Rieser. 2016. Crowd-sourcing NLG Data: Pictures Elicit Better Data. In Proceedings of the International Natural Language Generation conference (INLG). Maxwell Nye, Luke Hewitt, Joshua Tenenbaum, and Armando Solar-Lezama. 2019. Learning to Infer Program Sketches. In Proceedings of the International Conference on Machine Learning (ICML). Aarne Ranta. 1998. A Multilingual Natural-Language Interface to Regular Expressions. In Finite State Methods in Natural Language Processing. 6091 Abhilasha Ravichander, Thomas Manzini, Matthias Grabmair, Graham Neubig, Jonathan Francis, and Eric Nyberg. 2017. How Would You Say It? Eliciting Lexically Diverse Dialogue for Supervised Semantic Parsing. In Proceedings of the Annual SIGdial Meeting on Discourse and Dialogue (SIGDIAL). Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. 2017. A Corpus of Natural Language for Visual Reasoning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A Corpus for Reasoning about Natural Language Grounded in Photographs. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Xinyu Wang, Sumit Gulwani, and Rishabh Singh. 2016. FIDEX: Filtering Spreadsheet Data Using Examples. In Proceedings of the ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA). Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a Semantic Parser Overnight. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Navid Yaghmazadeh, Yuepeng Wang, Isil Dillig, and Thomas Dillig. 2017. SQLizer: Query Synthesis from Natural Language. In Proceedings of the ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA). Xi Ye, Qiaochu Chen, Xinyu Wang, Isil Dillig, and Greg Durrett. 2019. Sketch-Driven Regular Expression Generation from Natural Language and Examples. In arXiv preprint arXiv:1908.05848. Zexuan Zhong, Jiaqi Guo, Wei Yang, Tao Xie, JianGuang Lou, Ting Liu, and Dongmei Zhang. 2018. Generating regular expressions from natural language specifications: Are we there yet? In the Statistical Modeling of Natural Software Corpora Workshop at the AAAI Conference on Artificial Intelligence (AAAI Workshop). 6092 A Regex DSL Nonterminals r := startwith(r) r.* | endwith(r) .*r | contain(r) .*r.* | not(r) ∼r | optional(r) r? | star(r) r* | concat(r1, r2) r1r2 | and(r1, r2) r1&r2 | or(r1, r2) r1|r2 | rep(r,k) r{k} | repatleast(r,k) r{k, } | reprange(r,k1,k2) r{k1, k2} Terminals t := <let> [A-Za-z] | <cap> [A-Z] | <low> [a-z] | <num> [0-9] | <any> . 
| <spec> [-,;.+:!@# $%&*=ˆ] | <null> ∅ Table 7: Our regex DSL and the corresponding constructions in standard regular language. Our regex DSL is as expressive as and can be easily translated to standard regex syntax. B Details of Structured Grammar B.1 Grammar Rules See Figure 7. B.2 Implementation Details Intersection While building INTERSECTION regexes, we impose context-dependent constraints mainly to avoid combinations of regexes that are redundant or in conflict. Conflicts often occur between a ComposedBy constraint and the other constraints. A ComposedBy constraint indicates the allowed characters; e.g., repeatatleast(or(<let>,<spec>),1) means there can only be letters and special characters in the matched string. Therefore, when we already have such a constraint in the tree, we only allow the terminals to be selected from the valid subset of <let> and <spec> while expanding the other subtrees. This greatly reduce the chances of yielding empty regexes as well as redundant regexes (e.g., in and(repeatatleast(or(<let>,<spec>), 1),not(contain(<num>))), the second constraint is actually redundant). Concatenation CONCATENATION regexes are a sequence of simple components. As stated above, our grammar encourages the phenomenon of repetition that commonly occurs in real regexes by copying existing sub-trees. Separation SEPARATION regexes have several subfields, which can be specified by either INTERSECTION regexes or CONCATENATION regexes, and which are delimited by a constant. The fields of real regexes are often related, i.e., they share common components. For instance, the format of U.S. phone numbers is “xxx-xxx-xxxx” where “x” is a digit. Here the three fields are all digits but differ in length. Similar to the CONCATENATION template, we alter the distribution so as to copy the already generated subtrees. We also allow a class of SEPARATION with an arbitrary number of identical fields separated by a constant (e.g., a list of comma-separated numbers). Complexity Control We aim to create a collection of complicated regexes, but we do not wish to make them needlessly complex along unrealistic axes. We assess the complexity of generated regexes using a measure we call semantic complexity, which roughly measures how many factors would need to be specified by a user. Generally, each constraint or components counts for one degree of semantic complexity, e.g., not(contain(x)) and repeat(x,4) are of complexity level one. High-level macro constraints are of complexity level two since they need more verbal explanation. We limit the complexity degrees all of our generated regexes to be strictly no more than six. More details about the number of nodes and depth of our regexes can be found in Section 4. C HIT Example See Figure 8. 
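Because the mapping in Table 7 is mechanical, the purely regular subset of the DSL can be translated with a short recursive routine. The sketch below is an illustration of that mapping and not the conversion code used in this work; and/not require automaton intersection and complement (as provided by dk.brics) and are deliberately left unhandled, and constants such as <C0> are treated as literal strings.

```python
import re

# Character-class terminals from Table 7.
TERMINALS = {"<let>": "[A-Za-z]", "<cap>": "[A-Z]", "<low>": "[a-z]",
             "<num>": "[0-9]", "<any>": ".", "<spec>": r"[-,;.+:!@#$%&*=^]"}

def split_args(s):
    """Split 'a,b,c' at top-level commas, ignoring commas inside parentheses."""
    args, depth, start = [], 0, 0
    for i, ch in enumerate(s):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == "," and depth == 0:
            args.append(s[start:i]); start = i + 1
    args.append(s[start:])
    return args

def dsl_to_regex(term):
    """Translate a DSL string (regular subset of Table 7) into a standard regex string."""
    term = term.strip()
    if "(" not in term:                       # terminal, constant, or integer argument
        if term in TERMINALS:
            return TERMINALS[term]
        if term.isdigit():
            return term
        return re.escape(term.strip("<>"))    # constants such as <C0>, <.>
    op, body = term.split("(", 1)
    args = [dsl_to_regex(a) for a in split_args(body[:-1])]
    if op == "concat":
        return "".join(args)
    if op == "or":
        return "(" + "|".join(args) + ")"
    if op == "startwith":
        return args[0] + ".*"
    if op == "endwith":
        return ".*" + args[0]
    if op == "contain":
        return ".*" + args[0] + ".*"
    if op == "optional":
        return "(" + args[0] + ")?"
    if op == "star":
        return "(" + args[0] + ")*"
    if op == "rep":
        return "(" + args[0] + "){" + args[1] + "}"
    if op == "repatleast":
        return "(" + args[0] + "){" + args[1] + ",}"
    if op == "reprange":
        return "(" + args[0] + "){" + args[1] + "," + args[2] + "}"
    raise ValueError(f"operator {op} (e.g. and/not) needs automaton support")

rx = dsl_to_regex("concat(reprange(<num>,1,2),concat(<.>,reprange(<num>,1,2)))")
print(rx)                                  # ([0-9]){1,2}\.([0-9]){1,2}
print(bool(re.fullmatch(rx, "12.3")))      # True
print(bool(re.fullmatch(rx, "123.4")))     # False
```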
6093 Intersection Template IntTemp →Cons | and(Cons,IntTemp) Cons →BasicCons | LengthCons | MacroCons BasicCons →not(BasicCons) BasicCons →startwith(ConsExpr)|endwith(ConsExpr)| contain(ConsExpr) LengthCons →rep(<any>,k)| repatleast(<any>,k) |reprange(<any>,k,k) MacroCons →ConsistOfCons|AdvStartwithCons| AdvEndwithCons | CondContainCons ConsistOfCons →repatleast(LiteralSet,1) AdvStartwithCons →and(startwith(Literal),not(startwith(Literal))) AdvEndwithCons →and(endwith(Literal),not(endwith(Literal))) CondContainCons →not(contain(concat(Literal,notcc(Literal)))) CondContainCons →not(contain(concat(notcc(Literal),Literal))) ConsExpr →LiteralSet|MinConsExpr|concat(MinConsExpr,MinConsExpr) MinConsExpr →Literal|rep(Literal,k) Concatenation Template CatTemp →Comp, concat(Comp, CatTemp) Comp →optional(Comp) Comp →BasicComp| MacroComp BasicComp →CompExpr|rep(CompExpr,k)| repatleast(CompExpr,k) |reprange(CompExpr,k,k) MacroComp →or(rep(<Literal>,k),rep(<Literal>,k)) MacroComp →or(repatleast(<Literal>,k),repatleast(<Literal>,k)) MacroComp →or(reprange(<Literal>,k,k),reprange(<Literal>,k,k)) CompExpr →Literal|LiteralSet Separation Template SepTemp →concat(Seg,Delimiter,Seg,Delimiter,Seg) SepTemp →concat(Seg,star(concat(Delimiter,Seg)) Seg →IntTemp|CatTemp Delimiter →CONST Literals etc. Literal →CC | CONST | STR # CONST can be any const character, STR can be any string values. CC →<num>|<let>|<low>|<cap>|<spec> LiteralSet →Literal|or(Literal,LiteralSet) Figure 7: Grammar rules for generating regexes in our dataset. Our grammar contains much more rules than a standard regex grammar, and is highly structured in that we have high-level templates and macros. 6094 Instructions: In this task, you will be writing down descriptions of the patterns you see in a group of strings. For each HIT, you’ll be given a figure visually specifying a pattern and a few examples of strings following or not following the pattern to help you to understand it. Please write a description (generally 1-4 sentences) that describes the pattern. In addition, please write one additional string that follows the pattern. Things to keep in mind: • Please describe the pattern underlying the string examples, not the sequence of strings itself. Do not write things like “the first line ..., the second line ....” • Try to be precise about describing the pattern, but also concise. Don’t describe the same property of the strings in multiple ways. • You are not required to use the keywords in the figure. If you can think of another way to express the intent, that’s okay. • Please try to write natural and fluent sentences. • Additional string example must be different. Example strings that follow the pattern: a51,B457 a74,B23 a09,849 Example strings that do not follow the pattern: b55,B193 a7,B23 a09,1 Figure 8: HIT prompt for the description writing task. We particularly emphasize in the instructions that Turkers should use precise and original language.
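To give a feel for how regexes could be sampled from a grammar like the one in Figure 7 with the copy-biased re-weighting of Section 2.2, here is a toy sketch. The component inventory and the bias constant are made-up assumptions, only the CONCATENATION template is shown, and this is not the generator used to build the dataset.

```python
import random

rng = random.Random(0)

def sample_component(previous):
    """Sample one Comp sub-regex; components already generated in the same tree
    receive extra probability mass, so repetition (as in regex (b)) is encouraged."""
    base = ["rep(<num>,3)", "reprange(<num>,1,2)", "rep(<let>,2)",
            "repatleast(<low>,1)", "<.>", "<->"]
    weights = [1.0] * len(base)
    for i, comp in enumerate(base):
        if comp in previous:
            weights[i] += 3.0          # copy bias (illustrative constant)
    return rng.choices(base, weights=weights, k=1)[0]

def sample_concatenation(n_components=3):
    """Sample a CONCATENATION-template regex with n components."""
    comps = []
    for _ in range(n_components):
        comps.append(sample_component(comps))
    regex = comps[-1]
    for c in reversed(comps[:-1]):     # right-nested concat, as in the DSL
        regex = f"concat({c},{regex})"
    return regex

for _ in range(3):
    print(sample_concatenation())
# e.g. concat(reprange(<num>,1,2),concat(<.>,reprange(<num>,1,2)))
```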
2020
541
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6095–6104 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6095 Curriculum Learning for Natural Language Understanding Benfeng Xu1∗, Licheng Zhang1∗, Zhendong Mao1†, Quan Wang2 , Hongtao Xie1 and Yongdong Zhang1 1School of Information Science and Technology, University of Science and Technology of China, Hefei, China 2Beijing Research Institute, University of Science and Technology of China, Beijing, China {benfeng,zlczlc}@mail.ustc.edu.cn, [email protected] {zdmao,htxie,zhyd73}@ustc.edu.cn Abstract With the great success of pre-trained language models, the pretrain-finetune paradigm now becomes the undoubtedly dominant solution for natural language understanding (NLU) tasks. At the fine-tune stage, target task data is usually introduced in a completely random order and treated equally. However, examples in NLU tasks can vary greatly in difficulty, and similar to human learning procedure, language models can benefit from an easy-to-difficult curriculum. Based on this idea, we propose our Curriculum Learning approach. By reviewing the trainset in a crossed way, we are able to distinguish easy examples from difficult ones, and arrange a curriculum for language models. Without any manual model architecture design or use of external data, our Curriculum Learning approach obtains significant and universal performance improvements on a wide range of NLU tasks. 1 Introduction Natural Language Understanding (NLU), which requires machines to understand and reason with human language, is a crucial yet challenging problem. Recently, language model (LM) pre-training has achieved remarkable success in NLU. Pre-trained LMs learn universal language representations from large-scale unlabeled data, and can be simply finetuned with a few adjustments to adapt to various NLU tasks, showing consistent and significant improvements in these tasks (Radford et al., 2018; Devlin et al., 2018). While lots of attention has been devoted to designing better pre-training strategies (Yang et al., 2019; Liu et al., 2019; Raffel et al., 2019), it is also valuable to explore how to more effectively solve downstream NLU tasks in the fine-tuning stage. ∗Equal contribution. †Corresponding author. Easy cases: easy, comfortable positive most purely enjoyable positive most plain, unimaginative negative badly edited negative Hard cases: why didn’t Hollywood think of this sooner positive I simply can’t recommend it enough positive supposedly funny movie negative occasionally interesting negative Table 1: Examples from SST-2 sentiment classification task. Difficulty levels are determined by our review method (detailed later). Most current approaches perform fine-tuning in a straightforward manner, i.e., all training examples are treated equally and presented in a completely random order during training. However, even in the same NLU task, the training examples could vary significantly in their difficulty levels, with some easily solvable by simple lexical clues while others requiring sophisticated reasoning. Table 1 shows some examples from the SST-2 sentiment classification task (Socher et al., 2013), which identifies sentiment polarities (positive or negative) of movie reviews. The easy cases can be solved directly by identifying sentiment words such as “comfortable” and “unimaginative”, while the hard ones further require reasoning with negations or verb qualifiers like “supposedly” and “occasionally”. 
Extensive research suggests that presenting training examples in a meaningful order, starting from easy ones and gradually moving on to hard ones, would benefit the learning process, not only for humans but also for machines (Skinner, 1958; Elman, 1993; Peterson, 2004; Krueger and Dayan, 2009). Such an organization of learning materials in human learning procedure is usually referred to as Curriculum. In this paper, we draw inspiration from similar ideas, and propose our approach 6096 for arranging a curriculum when learning NLU tasks. Curriculum Learning (CL) is first proposed by (Bengio et al., 2009) in machine learning area, where the definition of easy examples is established ahead, and an easy-to-difficult curriculum is arranged accordingly for the learning procedure. Recent developments have successfully applied CL in computer vision areas (Jiang et al., 2017; Guo et al., 2018; Hacohen and Weinshall, 2019). It is observed in these works that by excluding the negative impact of difficult or even noisy examples in early training stage, an appropriate CL strategy can guide learning towards a better local minima in parameter space, especially for highly non-convex deep models. We argue that language models like transformer, which is hard to train (Popel and Bojar, 2018), should also benefit from CL in the context of learning NLU tasks, and such idea still remains unexplored. The key challenge in designing a successful CL strategy lies in how to define easy/difficult examples. One straightforward way is to simply predefine the difficulty in revised rules by observing the particular target task formation or training data structure accordingly (Guo et al., 2018; Platanios et al., 2019; Tay et al., 2019). For example, (Bengio et al., 2009) utilized an easier version of shape recognition trainset which comprised of less varied shapes, before the training of complex one started. More recently, (Tay et al., 2019) considered the paragraph length of a question answering example as its reflection of difficulty. However, such strategies are highly dependent on the target dataset itself and often fails to generalize to different tasks. To address this challenge, we propose our Cross Review method for evaluating difficulty. Specifically, we define easy examples as those well solved by the exact model that we are to employ in the task. For different tasks, we adopt their corresponding golden metrics to calculate a difficulty score for each example in the trainset. Then based on these difficulty scores, we further design a re-arranging algorithm to construct the learning curriculum in an annealing style, which provides a soft transition from easy to difficult for the model. In general, our CL approach is not constrained to any particular task, and does not rely on human prior heuristics about the task or dataset. Experimental results show that our CL approach can greatly help language models learn in their finetune stage. Without any task-tailored model architecture design or use of external data, we are able to obtain significant and universal improvements on a wide range of downstream NLU tasks. Our contributions can be concluded as follows: • We explore and demonstrate the effectiveness of CL in the context of finetuning LM on NLU tasks. To the best of our knowledge, this is one of the first times that CL strategy is proved to be extensively prospective in learning NLU tasks. 
• We propose a novel CL framework that consists of a Difficulty Review method and a Curriculum Arrangement algorithm, which requires no manual pre-design and generalizes readily to a wide range of tasks. • We obtain universal performance gains on a wide range of NLU tasks, including Machine Reading Comprehension (MRC) and Natural Language Inference. The improvements are especially significant on the more challenging tasks. 2 Preliminaries We describe our CL approach using BERT (Devlin et al., 2018), the most influential pre-trained LM, which has achieved state-of-the-art results on a wide range of NLP tasks. BERT is pretrained with the Masked Language Model and Next Sentence Prediction tasks on large-scale corpora. It consists of a hierarchical stack of $l$ self-attention layers, takes an input sequence of no more than 512 tokens, and outputs a contextual representation, an $H$-dimensional vector, for each token at position $i$, which we denote as $h^l_i \in \mathbb{R}^H$. In natural language understanding tasks, the input sequence usually starts with the special token ⟨CLS⟩ and ends with ⟨SEP⟩; for sequences consisting of two segments, as in pairwise sentence tasks, another ⟨SEP⟩ is added in between as a separator. For target benchmarks, we employ a wide range of NLU tasks, including machine reading comprehension, sequence classification, pairwise text similarity, etc. Following (Devlin et al., 2018), we adapt BERT to NLU tasks in the most straightforward way: we simply add one linear layer on top of the final hidden outputs, then finetune the entire model. Specifically, we summarize the configurations and corresponding metrics for the different tasks used in our algorithms as follows: Machine Reading Comprehension In this work we consider the extractive MRC task. Given a passage $P$ and a corresponding question $Q$, the goal is to extract a continuous span $\langle p_{start}, p_{end} \rangle$ from $P$ as the answer $A$, where the start and end are its boundaries. We pass the concatenation of the question and paragraph [⟨CLS⟩, Q, ⟨SEP⟩, P, ⟨SEP⟩] to the pretrained LM and use a linear classifier on top of it to predict the answer span boundaries. For the $i$-th input token, the probabilities that it is the start or the end are calculated as: $[\mathrm{logit}^{start}_i, \mathrm{logit}^{end}_i]^T = W^T_{MRC} h^l_i$, $p^{start}_i = \mathrm{softmax}(\{\mathrm{logit}^{start}_i\})$, $p^{end}_i = \mathrm{softmax}(\{\mathrm{logit}^{end}_i\})$, where $W^T_{MRC} \in \mathbb{R}^{2 \times H}$ is a trainable matrix. The training objective is the log-likelihood of the true start and end positions $y_{start}$ and $y_{end}$: $loss = -(\log(p^{start}_{y_{start}}) + \log(p^{end}_{y_{end}}))$. For unanswerable questions, the probability is calculated as $s_{un} = p^{start}_{cls} + p^{end}_{cls}$ using the ⟨CLS⟩ representation. We classify a question as unanswerable when $s_{un} > s_{i,j} = \max_{i \leq j}(p^{start}_i + p^{end}_j)$. F1 is used as the golden metric. Sequence Classification We consider the final contextual embedding of the ⟨CLS⟩ token, $h^l_0$, as the pooled representation of the whole input sequence $S$. The probability that the input sequence belongs to label $c$ is calculated by a linear output layer with parameter matrix $W_{SC} \in \mathbb{R}^{K \times H}$ followed by a softmax: $P(c|S) = \mathrm{softmax}(h^l_0 W^T_{SC})$, where $K$ is the number of classes. The log-likelihood is also used as the training objective for this task. Accuracy is the golden metric. Pairwise Text Similarity Similar to the sequence classification task, the final embedding of the ⟨CLS⟩ token, $h^l_0$, is used to represent the input text pair $(T_1, T_2)$. A parameter vector $W_{PTS} \in \mathbb{R}^H$ is introduced to compute the similarity score: $\mathrm{Similarity}(T_1, T_2) = h^l_0 W^T_{PTS}$. For this task, we use Mean Squared Error (MSE) as both the training objective and the golden metric: $MSE = (y - \mathrm{Similarity}(T_1, T_2))^2$, where $y$ is the continuous similarity label.
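The task heads above amount to single linear layers over the encoder outputs; a minimal PyTorch sketch is given below. It takes precomputed hidden states of shape (batch, seq_len, H) as input, omits the unanswerable-question score, and uses illustrative class and variable names rather than the exact implementation used in this work.

```python
import torch
import torch.nn as nn

class SpanHead(nn.Module):
    """Linear layer producing start/end logits from token representations h_i."""
    def __init__(self, hidden_size):
        super().__init__()
        self.qa_outputs = nn.Linear(hidden_size, 2)   # plays the role of W_MRC

    def forward(self, hidden_states):                 # (batch, seq_len, H)
        logits = self.qa_outputs(hidden_states)       # (batch, seq_len, 2)
        start_logits, end_logits = logits.unbind(dim=-1)
        return start_logits, end_logits               # softmax over positions at loss time

class ClassificationHead(nn.Module):
    """Linear layer over the <CLS> representation h_0 for K-way classification."""
    def __init__(self, hidden_size, num_labels):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)   # plays the role of W_SC

    def forward(self, hidden_states):
        return self.classifier(hidden_states[:, 0])   # use the <CLS> token

# Toy usage with random "encoder outputs" (batch=2, seq_len=16, H=768).
h = torch.randn(2, 16, 768)
start_logits, end_logits = SpanHead(768)(h)
loss = nn.CrossEntropyLoss()(start_logits, torch.tensor([3, 5]))  # negative log-likelihood of true starts
print(start_logits.shape, end_logits.shape, loss.item())
```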
Figure 1: Our Cross Review method: the target dataset is split into N meta-datasets; after the teachers are trained on them, each example is inferenced by all teachers except the one trained on its own meta-dataset, and the scores are summed as the final evaluation result.

3 Our CL Approach

We decompose our CL framework into two stages: Difficulty Evaluation and Curriculum Arrangement. For any target task, let D be the set of examples used for training and Θ be the language model that is expected to fit D. In the first stage, the goal is to assign each example d_j in D a score c_j that reflects its difficulty with respect to the model. We denote by C the set of difficulty scores corresponding to the training set D. In the second stage, based on these scores, D is organized into a sequence of ordered learning stages {S_i : i = 1, 2, ..., N} in an easy-to-difficult fashion, resulting in the final curriculum on which the model is trained. We elaborate on these two stages in Sections 3.1 and 3.2, respectively.

3.1 Difficulty Evaluation

The difficulty of a textual example reflects itself in many ways, e.g., the length of the context, the usage of rare words, or the scale of the learning target. Although such heuristics seem reasonable to humans, the model itself may not see things the same way. We therefore argue that the difficulty score, as an intrinsic property of an example, should be decided by the model itself, and that the best measure is the golden metric of the target task, which can be accuracy, F1 score, etc., as introduced in Section 2. To perform difficulty evaluation, we first split the training set D uniformly into N shares {D̃_i : i = 1, 2, ..., N} and train N corresponding models {Θ̃_i : i = 1, 2, ..., N} on them, all identical in architecture to Θ (note that each model Θ̃_i sees only 1/N of the entire training set). We refer to these N models as teachers and to {D̃_i} as meta-datasets, since they serve only to collect information (i.e., the extent of difficulty) about the original training set D. The preparation of the teachers can be formulated as

  Θ̃_i = argmin_{Θ̃_i} Σ_{d_j ∈ D̃_i} L(d_j, Θ̃_i),  i = 1, 2, ..., N

where L denotes the loss function. After every teacher has been trained on its meta-dataset, the evaluation of the training set D begins. Each example d_j is included in exactly one meta-dataset, say D̃_k; we then perform inference on d_j with all teachers except teacher k, because the inference from teacher k should be isolated from the meta-dataset D̃_k it has already seen during training. After all inferences are finished, we calculate scores for d_j in the target task's metric, resulting in N−1 scores from N−1 different teachers:

  c_{ji} = M(Θ̃_i(x_j), y_j)

where Θ̃_i(·) denotes the inference function, x_j and y_j are the input and label of example d_j, M is the metric (F1, accuracy, or MSE, depending on the task, as introduced in Section 2), and c_{ji} is the score of d_j given by teacher Θ̃_i. Finally, we define the difficulty score of d_j as the aggregation of all N−1 scores:

  c_j = Σ_{i ∈ {1,...,N}, i ≠ k} c_{ji}

With all scores calculated, we obtain the final difficulty score set C as desired. We refer to our difficulty evaluation method as Cross Review (see Figure 1).
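As a schematic illustration of the Cross Review procedure just described (a sketch, not the authors' implementation; train_model, predict, and metric are assumed task-specific helpers, and each example is assumed to carry fields x, y, and id):

import random

def cross_review(train_set, num_teachers, train_model, predict, metric):
    # Split the training set uniformly into N meta-datasets.
    examples = list(train_set)
    random.shuffle(examples)
    shards = [examples[i::num_teachers] for i in range(num_teachers)]
    # Train one teacher per meta-dataset; each teacher sees only 1/N of the data.
    teachers = [train_model(shard) for shard in shards]
    # Score every example with the N-1 teachers that did not see its meta-dataset,
    # and sum the golden-metric scores to obtain its difficulty score c_j.
    scores = {}
    for k, shard in enumerate(shards):
        for example in shard:
            per_teacher = [metric(predict(teachers[i], example.x), example.y)
                           for i in range(num_teachers) if i != k]
            scores[example.id] = sum(per_teacher)   # higher sum = easier example
    return scores

The returned score set C can then be handed directly to the arrangement step described in Section 3.2.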
In the proposed method, the teacher models perform their inferences in a crossed way, which prevents a meta-dataset from contaminating the inference set. Moreover, each example receives its score from multiple teachers, so fluctuation in the evaluation results is greatly alleviated. Overall, our Cross Review method addresses the difficulty evaluation problem with a simple, elegant design.

3.2 Curriculum Arrangement

In this section we describe our method for arranging the training examples D into a learning curriculum according to their difficulty scores C. We design our curriculum in a multi-stage setting {S_i : i = 1, 2, ..., N}. Within each stage S_i, the examples are still shuffled to keep local stochasticity, and examples from different stages do not overlap, in order to prevent overfitting. The sampling algorithm is built on the following principle: the proportion of difficult examples in each stage should start at 0 and gradually increase until it reaches its share in the original dataset distribution. We first sort all examples by their difficulty scores C and divide them into N buckets {C_i : i = 1, 2, ..., N}, so that the examples are collected into N different levels of difficulty, ranging from C_1 (the easiest) to C_N (the hardest), with the proportion distribution

  num(C_1) : num(C_2) : ... : num(C_N)

For tasks with discrete metrics, this distribution is naturally formed by the hierarchy of difficulty scores and directly reflects the intrinsic difficulty distribution of the dataset; for other tasks, we manually divide C uniformly (please refer to the implementation details for the selected tasks in Section 4.2). Based on these buckets, we construct the learning curriculum one stage after another. For each learning stage S_i, we sample examples from all antecedent buckets {C_j : j = 1, 2, ..., i} in the proportion

  (1/N) num(C_1) : (1/N) num(C_2) : ... : (1/N) num(C_i)

and the final curriculum {S_i : i = 1, 2, ..., N} is formed in this way. We refer to this arrangement algorithm as the Annealing method, since it provides a soft transition through multiple learning stages. At each stage, the model is trained for one epoch. When training reaches S_N, the model should be ready for the original distribution of the training set D, so we finally add another stage S_{N+1} that covers the entire training set, on which the model is trained until it converges.
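A minimal sketch of this annealing arrangement is given below (illustrative Python under simplifying assumptions: it reuses the Cross Review scores and, for brevity, uses equal-sized buckets):

import random

def arrange_curriculum(examples, scores, num_stages):
    # Sort from easiest (highest summed score) to hardest, then split into
    # N difficulty buckets C_1 (easiest) ... C_N (hardest).
    ranked = sorted(examples, key=lambda e: scores[e.id], reverse=True)
    n = num_stages
    size = len(ranked) // n
    buckets = [ranked[j * size:(j + 1) * size] for j in range(n)]
    for b in buckets:
        random.shuffle(b)
    # Stage S_i draws a fresh 1/N slice from each antecedent bucket C_1..C_i,
    # so stages do not overlap and the share of hard examples grows gradually.
    used = [0] * n
    stages = []
    for i in range(1, n + 1):
        stage = []
        for j in range(i):
            chunk = len(buckets[j]) // n
            stage.extend(buckets[j][used[j]:used[j] + chunk])
            used[j] += chunk
        random.shuffle(stage)        # keep local stochasticity within a stage
        stages.append(stage)
    stages.append(list(ranked))      # final stage S_{N+1}: the full training set
    return stages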
4 Experiments

4.1 Datasets

In this section we briefly describe the three popular NLU benchmarks on which we evaluate our CL approach: SQuAD 2.0 (Rajpurkar et al., 2018), NewsQA (Trischler et al., 2016), and GLUE (Wang et al., 2018). Their scale and metrics are detailed in Table 2.

          SQuAD 2.0  NewsQA  MNLI-m  QNLI    QQP     RTE    SST-2  MRPC   CoLA      STS-B
Train     130.3k     92.5k   392.7k  104.7k  363.8k  2.5k   67.3k  3.7k   8.6k      5.7k
Dev       11.9k      5.2k    9.8k    5.5k    40.4k   277    872    408    1.0k      1.5k
Test      8.9k       5.1k    9.8k    5.5k    39.1k   3.0k   1.8k   1.7k   1.0k      1.4k
Metric    F1/EM      F1/EM   Acc.    Acc.    Acc.    Acc.   Acc.   F1     Matthews  Pearson

Table 2: The number of training, development, and test examples and the metrics of the tasks used in this work.

SQuAD The Stanford Question Answering Dataset (SQuAD), constructed from Wikipedia articles, is a well-known extractive machine reading comprehension dataset with two versions: SQuAD 1.1 (Rajpurkar et al., 2016) and SQuAD 2.0 (Rajpurkar et al., 2018). The 2.0 version also introduces unanswerable questions, making it a more challenging and practical task. In this paper, we take SQuAD 2.0 as our testbed.

NewsQA NewsQA (Trischler et al., 2016) is also an extractive MRC dataset but is considerably more challenging, with human performance at an F1 score of 0.694. NewsQA was collected from CNN news articles with two sets of crowdworkers: the "questioners" were shown only the article's headline, while the "answerers" had to find the answer in the full article. Following Fisch et al. (2019), we discard examples flagged as lacking annotator agreement for better evaluation.

GLUE The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018) is a collection of nine diverse sentence or sentence-pair language understanding tasks, including sentiment analysis, textual entailment, and sentence similarity. (The benchmark consists of Multi-Genre NLI (MNLI) (Williams et al., 2018), Quora Question Pairs (QQP) (Shankar Iyer, 2016), Question NLI (QNLI) (Rajpurkar et al., 2016), Stanford Sentiment Treebank (SST) (Socher et al., 2013), Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019), Semantic Textual Similarity Benchmark (STS-B) (Cer et al., 2017), Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005), Recognizing Textual Entailment (RTE) (Bentivogli et al., 2009), and Winograd NLI (WNLI) (Levesque et al., 2012).) It is considered a well-designed benchmark for evaluating the generalization and robustness of NLU algorithms. The labels of the GLUE test sets are hidden, and users must upload their predictions to obtain evaluation results; the number of submissions is limited to protect the test sets from overfitting.

4.2 Experimental Setups

We use BERT Large (Devlin et al., 2018) as our pre-trained language model to demonstrate the effectiveness of our CL approach. For MRC, we also test the BERT Base model for more comprehensive results. Besides results reported in the literature, we also provide our own re-implementation on all datasets, which forms a more competitive baseline for comparison. The only difference between our re-implementation and our CL approach is the arrangement of the curriculum, i.e., the order of the training examples.

Method          SQuAD 2.0        NewsQA
                EM      F1       EM      F1
BERT Base       73.66   76.30    –       –
BERT Base*      73.66   76.78    47.70   60.10
BERT Base+CL    74.96   77.93    47.72   60.57
BERT Large      78.98   81.77    –       –
BERT Large*     79.12   82.09    50.40   64.12
BERT Large+CL   79.43   82.66    50.50   64.42

Table 3: Results on SQuAD 2.0 and NewsQA, all on development sets. The SQuAD 2.0 baselines are obtained from Yang et al. (2019); * indicates our re-implementation.

To obtain a more comparable and stable difficulty score, we binarize the review results before summing them whenever possible. When accuracy is the metric, the score c_{ji} is already binary at the instance level; when F1 is the metric, we count any review result c_{ji} > 0 as correct. For other continuous metrics (MSE in this paper), we sum c_{ji} directly. We empirically choose N = 10 as the number of meta-datasets for most tasks (which is also the number of difficulty levels and the number of stages); for three datasets with rather limited scale (RTE, MRPC, and STS-B), we set N = 3. The scale of all datasets employed in this work is provided in Table 2. Intuitively, we could obtain better results by searching for the best N; we leave this to future work due to limited computational resources. We implement our approach on top of the PyTorch implementation of BERT (Wolf et al., 2019).
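The score binarization described above amounts to a small post-processing step before the per-teacher scores are summed; a hedged sketch (metric names are placeholders):

def binarize(score, metric_name):
    # Make per-teacher review results comparable before summing them.
    if metric_name == "accuracy":
        return score                       # already 0/1 at the instance level
    if metric_name == "f1":
        return 1.0 if score > 0 else 0.0   # any overlap with the gold span counts as correct
    return score                           # continuous metrics (e.g., MSE) are summed as-is

def difficulty_score(per_teacher_scores, metric_name):
    return sum(binarize(s, metric_name) for s in per_teacher_scores)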
We use the Adam optimizer (Kingma and Ba, 2014) with epsilon set to 1e-8. The learning rate warms up over the first 5% of steps and then decays linearly to 0 in all experiments. To construct our re-implementation, on both SQuAD 2.0 and NewsQA we perform a hyperparameter search with batch size in {16, 32} and learning rate in {1e-5, 2e-5, 3e-5, 4e-5} for the Base model, and batch size in {32, 48, 64} and learning rate in {5e-5, 6e-5, 7e-5} for the Large model. We reuse the best SQuAD 2.0 setting on NewsQA. We set the maximum input sequence length to 512 for NewsQA because its paragraphs are much longer. On GLUE, we run the experiments on the Large model with batch size in {16, 32} and learning rate in {1e-5, 2e-5, 3e-5}.

                 MNLI-m  QNLI  QQP   RTE   SST-2  MRPC  CoLA  STS-B  Avg
Results on dev
BERT Large       86.6    92.3  91.3  70.4  93.2   88.0  60.6  90.0   84.1
BERT Large*      86.6    92.5  91.5  74.4  93.8   91.7  63.5  90.2   85.5
BERT Large+CL    86.6    92.8  91.8  76.2  94.2   91.9  66.8  90.6   86.4
Results on test
BERT Large       86.7    91.1  89.3  70.1  94.9   89.3  60.5  87.6   83.7
BERT Large*      86.3    92.2  89.5  70.2  94.4   89.3  60.5  87.3   83.7
BERT Large+CL    86.7    92.5  89.5  70.7  94.6   89.6  61.5  87.8   84.1

Table 4: Results on the GLUE benchmark; * indicates our re-implementation. Baselines on the dev sets are obtained from Liu et al. (2019); baselines on the test sets are obtained from the leaderboard (https://gluebenchmark.com/leaderboard) as submitted by Devlin et al. (2018), which may use different hyperparameters. All results are produced with a single task and a single model.

4.3 MRC Results

The results for the MRC tasks are presented in Table 3. In all experiments, our CL approach outperforms its baseline by a considerable margin. On SQuAD 2.0, we obtain +1.30 EM / +1.15 F1 improvements with the Base model and +0.31 EM / +0.57 F1 with the Large model compared to our competitive re-implemented baseline. Note that the performance gain is more pronounced with the Base model. On NewsQA, we also obtain +0.02 EM / +0.47 F1 and +0.10 EM / +0.30 F1 improvements for the Base and Large models, respectively.

4.4 GLUE Results

We summarize our GLUE results in Table 4. Results on the dev sets show that our CL method consistently outperforms its competitive baseline on all 8 tasks, which indicates that our CL approach is not only robustly effective but also generalizes across a wide range of NLU tasks. Because the model architecture and hyperparameter settings are identical, all performance gains can be attributed to our CL approach alone. Specifically, we observe that our CL approach does better on more challenging tasks. For CoLA and RTE, the margins reach +3.3 and +1.8 in their respective metrics, which is larger than on the less challenging tasks where model performance has already reached a plateau. Such results are understandable: when learning harder tasks, the model can be overwhelmed by very difficult examples in the early stages, and a well-arranged curriculum is therefore more helpful. Even for tasks where the baseline already approaches human performance, such as SST-2, our CL approach still provides another +0.4 improvement, which demonstrates its robustness. Overall, our CL approach obtains a +0.9 average score gain on the GLUE benchmark compared to our re-implemented baseline. Results on the test sets further demonstrate the effectiveness of our approach: we obtain a +0.4 average score gain compared to both our re-implementation and the baseline on the leaderboard.
4.5 Ablation Study

In this section, we examine our approach with respect to a series of questions: (i) what is the best CL design strategy for NLU tasks, (ii) can Cross Review really distinguish easy examples from difficult ones, and (iii) what is the best choice of N. We use the SQuAD 2.0 task in most experiments for generality, and all experiments are performed with the BERT Base model.

Comparison with Heuristic CL Methods To demonstrate our advantage over manually designed CL methods, we compare our approach with several heuristic curriculum designs in Table 5.

Method                      SQuAD 2.0        Δ
                            EM      F1
No Curriculum               –       76.30     –
No Curriculum*              73.66   76.78     –
Rarity+Annealing            73.75   76.90    +0.12
Answer+Annealing            74.02   77.15    +0.37
Question+Annealing          74.35   77.37    +0.59
Paragraph+Annealing         74.45   77.54    +0.76
Cross-Review+Naive order    74.31   77.29    +0.51
Cross-Review+Annealing      74.96   77.93    +1.15

Table 5: Comparison with heuristic CL designs. * indicates our re-implementation; Δ indicates the absolute F1 improvement over it.

Figure 2 (caption): Statistics of the different difficulty levels in SQuAD 2.0. The four lines indicate the average answer length, question length, paragraph length, and the proportion of unanswerable examples with respect to the difficulty level; bars indicate the number of examples in each bucket.

For the Difficulty Review baselines, we adopt word rarity, answer length, question length, and paragraph length as difficulty metrics, similar to Tay et al. (2019) and Platanios et al. (2019). We compute word rarity as the average word frequency of the question, where frequencies are counted over all questions in the training set, and we define difficult examples as those with lower word frequencies and with longer answers, questions, and paragraphs. We first sort all examples by these metrics and divide them evenly into 10 buckets with corresponding difficulty levels; the Curriculum Arrangement strategy remains the Annealing method. For the Curriculum Arrangement baseline, we try a Naive order: we directly use the buckets {C_i} as the curriculum (instead of {S_i}) without any sampling algorithm, except that S_{N+1} is still retained for a fair comparison; the Difficulty Evaluation method remains Cross Review. The results show that these intuitive designs do work, with improvements ranging from +0.12 to +0.76 F1, but they are all outperformed by our Cross Review + Annealing approach.

Case study: Easy vs. Difficult In our Cross Review method, the dataset is divided into N buckets {C_i} with different levels of difficulty. Here we explore what these easy and difficult examples actually look like in various tasks. Earlier in the introduction (see Table 1), we provided a straightforward illustration of easy versus hard cases in the SST-2 dataset; among the ten difficulty levels, those cases were sampled from the easiest bucket (C_1) and the most difficult bucket (C_10), respectively, and the contrast is clear and intuitive. We further choose SQuAD 2.0 as a more complex task for in-depth analysis. Under the N = 10 setting, Figure 2 shows the statistical distinctions among the buckets {C_i}. With three monotonically increasing curves, it is clear that difficult examples tend to have longer paragraphs, longer questions, and longer answers. This conforms to our intuition that longer text usually involves more complex reasoning patterns and context dependency.
Thanks to our CL approach, these challenging examples are now successfully excluded from the early stages. Another interesting observation is that the percentage of unanswerable examples drops consistently from 40% to 20% along the difficulty axis; we conjecture that simply classifying a question as unanswerable is easier than extracting the exact answer boundaries.

On Different Settings of N One quantity that needs to be specified in advance in our approach is N, which determines the number of meta-datasets, the number of learning stages, and the granularity of the difficulty score. Assuming the metric lies between 0 and 1, which fits almost all cases, the difficulty score c_j ranges from 0 (when all teacher models fail) to N−1 (when all teacher models succeed), so the examples can be distinguished into N different levels; the larger N, the finer the granularity. To examine the impact of different settings, we perform an ablation study on SQuAD 2.0 over a wide range of choices, from 2 to 20 (see Figure 3).

Figure 3 (caption): F1 score on SQuAD 2.0 with respect to N. The dotted line is the baseline and the solid line is our re-implementation.

Under all of these settings, our approach outperforms the baseline by at least +0.5 F1 (including N = 2, where the difficulty evaluation may be affected by the fluctuation of a single-teacher review). We also experiment with an extremely large N: for N = 100, the result is 74.10 F1 (2.68 below our baseline), which is expected because each meta-dataset is then too small to prepare a decent teacher capable of evaluating. In general, our approach is robust to the setting of N.

5 Related Work

The idea of training a neural network in an easy-to-difficult fashion can be traced back to Elman (1993). Krueger and Dayan (2009) revisited the idea from a cognitive perspective with the shaping procedure, in which a teacher decomposes a complete task into sub-components. Building on these works, Curriculum Learning was first proposed by Bengio et al. (2009), who designed several toy experiments to demonstrate the benefits of a curriculum strategy in both image classification and language modeling. They also proposed that a curriculum can be seen as a sequence of training criteria and that, at its end, the reweighting of examples should be uniform with respect to the target distribution, which inspired the design of our Curriculum Arrangement algorithm. Although CL has been successfully applied to many areas of computer vision (Supancic and Ramanan, 2013; Chen and Gupta, 2015; Jiang et al., 2017), it was not introduced to NLU tasks until Sachan and Xing (2016), who, by experimenting with several heuristics, transferred the success of CL (Kumar et al., 2010) to machine reading comprehension; Sachan and Xing (2018) further extended this work to question generation. More recently, Tay et al. (2019) employed a CL strategy for reading comprehension over long narratives. Beyond these, to the best of our knowledge, few works discuss CL in the context of NLU. In terms of the methodology for designing CL algorithms, our approach is closely related to Guo et al. (2018), Wang et al. (2019), Platanios et al. (2019), and Tay et al. (2019), where a curriculum is formed in two steps: evaluating difficulty first and then sampling the examples into batches accordingly. The evaluation methods vary greatly across target tasks.
(Guo et al., 2018) first examined the examples in their feature space, and define difficulty by the distribution density, which successfully distinguished noisy images. (Wang et al., 2019) incorporated category information into difficulty metric to address imbalanced data classification. In language tasks, (Platanios et al., 2019) and (Tay et al., 2019) propose to consider the length of context as extent of difficulty. Another line of works see curriculum construction as an optimization problem (Kumar et al., 2010; Graves et al., 2017; Fan et al., 2018), which usually involves sophisticated design and is quite different from our approach. 6 Conclusion In this work we proposed a novel Curriculum Learning approach which does not rely on human heuristics and is simple to implement. With the help of such a curriculum, language models can significantly and universally perform better on a wide range of downstream NLU tasks. In the future, we look forward to extend CL strategy to the pretraining stage, and guide deep models like transformer from a language beginner to a language expert. Acknowledgments We thank all anonymous reviewers for their valuable comments. This work is supported by the National Natural Science Foundation of China, Grant No.U19A2057, No.61876223, the National Science Fund for Distinguished Young Scholars No.61525206, the Fundamental Research Funds for the Central Universities, Grant No.WK3480000008, and the grant of Tianjin New Generation Artificial Intelligence Major Program No.19ZXZNGX00110. 6103 References Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48. ACM. Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In TAC. Daniel Cer, Mona Diab, Eneko Agirre, Inigo LopezGazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14. Xinlei Chen and Abhinav Gupta. 2015. Webly supervised learning of convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1431–1439. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Jeffrey L Elman. 1993. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71–99. Yang Fan, Fei Tian, Tao Qin, Xiang-Yang Li, and TieYan Liu. 2018. Learning to teach. arXiv preprint arXiv:1805.03643. Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In Proceedings of 2nd Machine Reading for Reading Comprehension (MRQA) Workshop at EMNLP. Alex Graves, Marc G Bellemare, Jacob Menick, Remi Munos, and Koray Kavukcuoglu. 2017. Automated curriculum learning for neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1311–1320. JMLR. org. 
Sheng Guo, Weilin Huang, Haozhi Zhang, Chenfan Zhuang, Dengke Dong, Matthew R Scott, and Dinglong Huang. 2018. Curriculumnet: Weakly supervised learning from large-scale web images. In Proceedings of the European Conference on Computer Vision (ECCV), pages 135–150. Guy Hacohen and Daphna Weinshall. 2019. On the power of curriculum learning in training deep networks. arXiv preprint arXiv:1904.03626. Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. 2017. Mentornet: Learning datadriven curriculum for very deep neural networks on corrupted labels. arXiv preprint arXiv:1712.05055. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Kai A Krueger and Peter Dayan. 2009. Flexible shaping: How learning in small steps helps. Cognition, 110(3):380–394. M Pawan Kumar, Benjamin Packer, and Daphne Koller. 2010. Self-paced learning for latent variable models. In Advances in Neural Information Processing Systems, pages 1189–1197. Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Gail B Peterson. 2004. A day of great illumination: Bf skinner’s discovery of shaping. Journal of the experimental analysis of behavior, 82(3):317–328. Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, and Tom M Mitchell. 2019. Competence-based curriculum learning for neural machine translation. arXiv preprint arXiv:1903.09848. Martin Popel and Ondˇrej Bojar. 2018. Training tips for the transformer model. The Prague Bulletin of Mathematical Linguistics, 110(1):43–70. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openaiassets/researchcovers/languageunsupervised/language understanding paper. pdf. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. 6104 Mrinmaya Sachan and Eric Xing. 2016. Easy questions first? a case study on curriculum learning for question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 453–463. Mrinmaya Sachan and Eric Xing. 2018. Self-training for jointly learning to ask and answer questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 629–640. Kornl Csernai Shankar Iyer, Nikhil Dandekar. 2016. Diy corpora: the www and the translator. Burrhus F Skinner. 1958. Reinforcement today. American Psychologist, 13(3):94. 
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631–1642. James S Supancic and Deva Ramanan. 2013. Selfpaced learning for long-term tracking. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2379–2386. Yi Tay, Shuohang Wang, Luu Anh Tuan, Jie Fu, Minh C Phan, Xingdi Yuan, Jinfeng Rao, Siu Cheung Hui, and Aston Zhang. 2019. Simple and effective curriculum pointer-generator networks for reading comprehension over long narratives. arXiv preprint arXiv:1905.10847. Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2016. Newsqa: A machine comprehension dataset. arXiv preprint arXiv:1611.09830. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Yiru Wang, Weihao Gan, Jie Yang, Wei Wu, and Junjie Yan. 2019. Dynamic curriculum learning for imbalanced data classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 5017–5026. Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625–641. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6105–6117 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6105 Do Neural Models Learn Systematicity of Monotonicity Inference in Natural Language? Hitomi Yanaka1, Koji Mineshima2, Daisuke Bekki3, and Kentaro Inui4,1 1RIKEN, 2Keio University, 3Ochanomizu University, 4Tohoku University [email protected], [email protected], [email protected], [email protected] Abstract Despite the success of language models using neural networks, it remains unclear to what extent neural models have the generalization ability to perform inferences. In this paper, we introduce a method for evaluating whether neural models can learn systematicity of monotonicity inference in natural language, namely, the regularity for performing arbitrary inferences with generalization on composition. We consider four aspects of monotonicity inferences and test whether the models can systematically interpret lexical and logical phenomena on different training/test splits. A series of experiments show that three neural models systematically draw inferences on unseen combinations of lexical and logical phenomena when the syntactic structures of the sentences are similar between the training and test sets. However, the performance of the models significantly decreases when the structures are slightly changed in the test set while retaining all vocabularies and constituents already appearing in the training set. This indicates that the generalization ability of neural models is limited to cases where the syntactic structures are nearly the same as those in the training set. 1 Introduction Natural language inference (NLI), a task whereby a system judges whether given a set of premises P semantically entails a hypothesis H (Dagan et al., 2013; Bowman et al., 2015), is a fundamental task for natural language understanding. As with other NLP tasks, recent studies have shown a remarkable impact of deep neural networks in NLI (Williams et al., 2018; Wang et al., 2019; Devlin et al., 2019). However, it remains unclear to what extent DNN-based models are capable of learning the compositional generalization underlying NLI from given labeled training instances. Systematicity of inference (or inferential systematicity) (Fodor and Pylyshyn, 1988; Aydede, 1997) in natural language has been intensively studied in the field of formal semantics. From among the various aspects of inferential systematicity, in the context of NLI, we focus on monotonicity (van Benthem, 1983; Icard and Moss, 2014) and its productivity. Consider the following premise–hypothesis pairs (1)–(3), which have the target label entailment: (1) P: Some [puppies ↑] ran. H: Some dogs ran. (2) P: No [cats ↓] ran. H: No small cats ran. (3) P: Some [puppies which chased no [cats ↓]] ran. H: Some dogs which chased no small cats ran. As in (1), for example, quantifiers such as some exhibit upward monotone (shown as [... ↑]), and replacing a phrase in an upward-entailing context in a sentence with a more general phrase (replacing puppies in P with dogs as in H) yields a sentence inferable from the original sentence. In contrast, as in (2), quantifiers such as no exhibit downward monotone (shown as [... ↓]), and replacing a phrase in a downward-entailing context with a more specific phrase (replacing cats in P with small cats as in H) yields a sentence inferable from the original sentence. Such primitive inference patterns combine recursively as in (3). 
This manner of monotonicity and its productivity produces a potentially infinite number of inferential patterns. Therefore, NLI models must be capable of systematically interpreting such primitive patterns and reasoning over unseen combinations of patterns. Although many studies have addressed this issue by modeling logical reasoning in formal semantics (Abzianidze, 2015; Mineshima et al., 2015; Hu et al., 2019) and testing DNN-based models on monotonicity inference (Yanaka et al., 2019a,b; Richardson et al., 2020), the ability of DNN-based models to generalize to unseen combinations of patterns is still underexplored.

Figure 1 (caption and layout): An illustration of the basic idea. Panels: Systematicity — Train 1: fix a quantifier and feed various predicate replacements; Train 2: fix a predicate replacement and feed various quantifiers; Test: unseen combinations of quantifiers and predicate replacements. Productivity — Train 1: depth 1; Train 2: depth 2; Test: unseen depths. For Systematicity and Productivity, we train models on the Train 1 and Train 2 sets and test them on the Test set. An arrow means an entailment relation; LEX, ADJ, and PREP mean predicate replacements for lexical relations, adjectives, and prepositional phrases, respectively. In Productivity, we use various quantifiers and predicate replacements at each depth.

Given this background, we investigate the systematic generalization ability of DNN-based models on four aspects of monotonicity: (i) systematicity of predicate replacements (i.e., replacements with a more general or more specific phrase), (ii) systematicity of embedding quantifiers, (iii) productivity, and (iv) localism (see Section 2.2). To this aim, we introduce a new evaluation protocol where we (i) synthesize training instances from sampled sentences and (ii) systematically control which patterns are shown to the models in the training phase and which are left unseen. The rationale behind this protocol is twofold. First, patterns of monotonicity inference are highly systematic, so we can create training data with arbitrary combinations of patterns, as in examples (1)–(3). Second, evaluating the performance of models trained on well-known NLI datasets such as MultiNLI (Williams et al., 2018) might severely underestimate their ability, because such datasets tend to contain only a limited number of training instances that exhibit the inferential patterns of interest. Furthermore, using such datasets would prevent us from identifying which combinations of patterns the models can infer from which patterns in the training data. This paper makes two primary contributions. First, we introduce an evaluation protocol (our evaluation code will be publicly available at https://github.com/verypluming/systematicity) that systematically controls the training/test split under various combinations of semantic properties, in order to evaluate whether models learn inferential systematicity in natural language.
Second, we apply our evaluation protocol to three NLI models and present evidence suggesting that, while all models generalize to unseen combinations of lexical and logical phenomena, their generalization ability is limited to cases where sentence structures are nearly the same as those in the training set. 2 Method 2.1 Basic idea Figure 1 illustrates the basic idea of our evaluation protocol on monotonicity inference. We use synthesized monotonicity inference datasets, where NLI models should capture both (i) monotonicity directions (upward/downward) of various quantifiers and (ii) the types of various predicate replacements in their arguments. To build such datasets, we first generate a set of premises GQ d by a context-free grammar G with depth d (i.e., the maximum number of applications of recursive rules), given a set of quantifiers Q. Then, by applying GQ d to elements of a set of functions for predicate replacements (or replacement functions for short) R that rephrase a constituent in the input premise and return a hypothesis, we obtain a set DQ,R d of premise–hypothesis pairs defined as DQ,R d = {(P, H) | P ∈GQ d , ∃r ∈R (r(P) = H)}. For example, the premise Some puppies ran is generated from the quantifier some in Q and 6107 the production rule S →Q, N, IV, and thus it is an element of GQ 1 . By applying this premise to a replacement function that replaces the word in the premise with its hypernym (e.g., puppy ⊑ dog), we provide the premise–hypothesis pair Some puppies ran ⇒Some dogs ran in Fig. 1. We can control which patterns are shown to the models during training and which are left unseen by systematically splitting DQ,R d into training and test sets. As shown on the left side of Figure 1, we consider how to test the systematic capacity of models with unseen combinations of quantifiers and predicate replacements. To expose models to primitive patterns regarding Q and R, we fix an arbitrary element q from Q and feed various predicate replacements into the models from the training set of inferences D{q},R d generated from combinations of the fixed quantifier and all predicate replacements. Also, we select an arbitrary element r from R and feed various quantifiers into the models from the training set of inferences DQ,{r} d generated from combinations of all quantifiers and the fixed predicate replacement. We then test the models on the set of inferences generated from unseen combinations of quantifiers and predicate replacements. That is, we test them on the set of inferences D{q},{r} d generated from the complements {q}, {r} of {q}, {r}. If models capture inferential systematicity in combinations of quantifiers and predicate replacements, they can correctly perform all inferences in D{q},{r} d on an arbitrary split based on q, r. Similarly, as shown on the right side of Figure 1, we can test the productive capacity of models with unseen depths by changing the training/test split based on d. For example, by training models on DQ,R d and testing them on DQ,R d+1, we can evaluate whether models generalize to one deeper depth. By testing models with an arbitrary training/test split of DQ,R d based on semantic properties of monotonicity inference (i.e., quantifiers, predicate replacements, and depths), we can evaluate whether models systematically interpret them. 
2.2 Evaluation protocol To test NLI models from multiple perspectives of inferential systematicity in monotonicity inferences, we focus on four aspects: (i) systematicity of predicate replacements, (ii) systematicity of embedding quantifiers, (iii) productivity, and (iv) localism. For each aspect, we use a set DQ,R d of premise–hypothesis pairs. Let Q = Q↑∪Q↓ be the union of a set of selected upward quantifiers Q↑and a set of selected downward quantifiers Q↓such that |Q↑| = |Q↓| = n. Let R be a set of replacement functions {r1, . . . , rm}, and d be the embedding depth, with 1 ≤d ≤s. (4) is an example of an element of DQ,R 1 , containing the quantifier some in the subject position and the predicate replacement using the hypernym relation dogs ⊑animals in its upward-entailing context without embedding. (4) P: Some dogs ran ⇒ H: Some animals ran I. Systematicity of predicate replacements The following describes how we test the extent to which models generalize to unseen combinations of quantifiers and predicate replacements. Here, we expose models to all primitive patterns of predicate replacements like (4) and (5) and all primitive patterns of quantifiers like (6) and (7). We then test whether the models can systematically capture the difference between upward quantifiers (e.g., several) and downward quantifiers (e.g., no) as well as the different types of predicate replacements (e.g., the lexical relation dogs ⊑animals and the adjective deletion small dogs ⊑dogs) and correctly interpret unseen combinations of quantifiers and predicate replacements like (8) and (9). (5) P: Some small dogs ran ⇒ H: Some dogs ran (6) P: Several dogs ran ⇒ H: Several animals ran (7) P: No animals ran ⇒ H: No dogs ran (8) P: Several small dogs ran ⇒ H: Several dogs ran (9) P: No dogs ran ⇒ H: No small dogs ran Here, we consider a set of inferences DQ,R 1 whose depth is 1. We move from harder to easier tasks by gradually changing the training/test split according to combinations of quantifiers and predicate replacements. First, we expose models to primitive patterns of Q and R with the minimum training set. Thus, we define the initial training set S1 and test set T1 as follows: (S1, T1) = (D{q},R 1 ∪DQ,{r} 1 , D{q},{r} 1 ) where q is arbitrarily selected from Q, and r is arbitrarily selected from R. Next, we gradually add the set of inferences generated from combinations of an upward– downward quantifier pair and all predicate replacements to the training set. In the examples above, we add (8) and (9) to the training set to simplify the task. We assume a set Q′ of a pair of upward/downward quantifiers, namely, {(q↑, q↓) | (q↑, q↓) ⊆Q↑× Q↓, q↑, q↓̸= q}. We consider 6108 a set perm(Q′) consisting of permutations of Q′. For each p ∈perm(Q′), we gradually add a set of inferences generated from p(i) to the training set Si with 1 < i ≤n −1. Then, we provide a test set Ti generated from the complement Qi of Qi = {x | ∃y(x, y) ∈Q′ i or ∃y(y, x) ∈Q′ i} and {r} where Q′ i = {p(1), . . . , p(i)}. This protocol is summarized as Si+1 = Si ∪D {q↑ i ,q↓ i },R 1 , Ti = DQi,{r} 1 with 1 < i ≤n −1 where (q↑ i , q↓ i ) = p(i). To evaluate the extent to which the generalization ability of models is robust for different syntactic structures, we use an additional test set T′ i = DQi,{r} 1 generated using three production rules. The first is the case where one adverb is added at the beginning of the sentence, as in example (10). 
(10) Padv: Slowly, several small dogs ran Hadv: Slowly, several dogs ran The second is the case where a three-word prepositional phrase is added at the beginning of the sentence, as in example (11). (11) Pprep: Near the shore, several small dogs ran Hprep: Near the shore, several dogs ran The third is the case where the replacement is performed in the object position, as in example (12). (12) Pobj: Some tiger touched several small dogs Hobj: Some tiger touched several dogs We train and test models |perm(Q′)| times, then take the average accuracy as the final evaluation result. II. Systematicity of embedding quantifiers To properly interpret embedding monotonicity, models should detect both (i) the monotonicity direction of each quantifier and (ii) the type of predicate replacements in the embedded argument. The following describes how we test whether models generalize to unseen combinations of embedding quantifiers. We expose models to all primitive combination patterns of quantifiers and predicate replacements like (4)–(9) with a set of non-embedding monotonicity inferences DQ,R 1 and some embedding patterns like (13), where Q1 and Q2 are chosen from a selected set of upward or downward quantifiers such as some or no. We then test the models with an inference with an unseen quantifier several in (14) to evaluate whether models can systematically interpret embedding quantifiers. (13) P: Q1 animals that chased Q2 dogs ran H: Q1 animals that chased Q2 animals ran (14) P: Several animals that chased several dogs ran H: Several animals that chased several animals ran We move from harder to easier tasks of learning embedding quantifiers by gradually changing the training/test split of a set of inferences DQ,R 2 whose depth is 2, i.e., inferences involving one embedded clause. We assume a set Q′ of a pair of upward and downward quantifiers as Q′ ≡{(q↑, q↓) | (q↑, q↓) ⊆Q↑×Q↓}, and consider a set perm(Q′) consisting of permutations of Q′. For each p ∈ perm(Q′), we gradually add a set of inferences D2 generated from p(i) to the training set Si with 1 ≤i ≤n −1. We test models trained with Si on a test set Ti generated from the complement Qi of Qi = {x | ∃y(x, y) ∈Q′ i or ∃y(y, x) ∈Q′ i} where Q′ i = {p(1), . . . , p(i)}, summarized as S0 = DQ,R 1 , Si = Si−1 ∪D {q↑ i ,q↓ i },R 2 , Ti = DQi,R 2 with 1 ≤i ≤n −1 where (q↑ i , q↓ i ) = p(i). We train and test models |perm(Q′)| times, then take the average accuracy as the final evaluation result. III. Productivity Productivity (or recursiveness) is a concept related to systematicity, which refers to the capacity to grasp an indefinite number of natural language sentences or thoughts with generalization on composition. The following describes how we test whether models generalize to unseen deeper depths in embedding monotonicity (see also the right side of Figure 1). For example, we expose models to all primitive nonembedding/single-embedding patterns like (15) and (16) and then test them with deeper embedding patterns like (17). (15) P: Some dogs ran H: Some animals ran (16) P: Some animals which chased some dogs ran H: Some animals which chased some animals ran 6109 Depth Pred. Monotone Arg. Example (premise, hypothesis, label) Avg. Len. 1 CONJ DOWNWARD SECOND Less than three lions left. Less than three lions left and cried. ENTAILMENT 4.6 2 PP UPWARD FIRST Few lions that hurt at most three small dogs walked. Few lions that hurt at most three dogs walked. ENTAILMENT 9.0 3 AdJ DOWNWARD FIRST Some elephant no rabbit which touched a few dogs hit rushed. 
Some elephant no rabbit which touched a few small dogs hit rushed. ENTAILMENT 12.3 4 RC UPWARD FIRST Less than three tigers which accepted several rabbits that loved several foxes more than three monkeys cleaned dawdled. Less than three tigers which accepted several rabbits that loved several foxes more than three monkeys which ate dinner cleaned dawdled. ENTAILMENT 16.6 Table 1: Examples of generated premise–hypothesis pairs. Depth: depth of embedding; Pred.: type of predicate replacements; Monotone: direction of monotonicity; Arg.: argument where the predicate replacement is performed; Avg. Len.: average sentence length. (17) P: Some animals which chased some cats which followed some dogs ran H: Some animals which chased some cats which followed some animals ran To evaluate models on the set of inferences involving embedded clauses with depths exceeding those in the training set, we train models with ∪ d∈{1,...,i+1} Dd, where we refer to DQ,R d as Dd for short, and test the models on ∪ d∈{i+2,...,s} Dd with 1 ≤i ≤s −2. IV. Localism According to the principle of compositionality, the meaning of a complex expression derives from the meanings of its constituents and how they are combined. One important concern is how local the composition operations should be (Pagin and Westerståhl, 2010). We therefore test whether models trained with inferences involving embedded monotonicity locally perform inferences composed of smaller constituents. Specifically, we train models with examples like (17) and then test the models with examples like (15) and (16). We train models with Dd and test the models on ∪ k∈{1,...,d} Dk with 3 ≤d ≤s . 3 Experimental Setting 3.1 Data creation To prepare the datasets shown in Table 1, we first generate premise sentences involving quantifiers from a set of context-free grammar (CFG) rules and lexical entries, shown in Table 6 in the Appendix. We select 10 words from among nouns, intransitive verbs, and transitive verbs as lexical entries. A set of quantifiers Q consists of eight elements; we use a set of four downward quantifiers Q↓={no, at most three, less than three, few} and a set of four upward quantifiers Q↑={some, at Function Example r1: hyponym dogs ⊑animals r2: adjective small dogs ⊑dogs r3: preposition dogs in the park ⊑dogs r4: relative clause dogs which ate dinner ⊑dogs r5: adverb ran quickly ⊑ran r6: disjunction ran ⊑ran or walked r7: conjunction ran and barked ⊑ran Table 2: Examples of replacement functions. least three, more than three, a few}, which have the same monotonicity directions in the first and second arguments. We thus consider n = |Q↑| = |Q↓|=4 in the protocol in Section 2.2. The ratio of each monotonicity direction (upward/downward) of generated sentences is set to 1 : 1. We then generate hypothesis sentences by applying replacement functions to premise sentences according to the polarities of constituents. The set of replacement functions R is composed of the seven types of lexical replacements and phrasal additions in Table 2. We remove unnatural premise–hypothesis pairs in which the same words or phrases appear more than once. For embedding monotonicity, we consider inferences involving four types of replacement functions in the first argument of the quantifier in Table 2: hyponyms, adjectives, prepositions, and relative clauses. We generate sentences up to the depth d = 5. There are various types of embedding monotonicity, including relative clauses, conditionals, and negated clauses. 
In this paper, we consider three types of embedded clauses: peripheral-embedding clauses and two kinds of center-embedding clauses, shown in Table 6 in the Appendix. 6110 The number of generated sentences exponentially increases with the depth of embedded clauses. Thus, we limit the number of inference examples to 320,000, split into 300,000 examples for the training set and 20,000 examples for the test set. We guarantee that all combinations of quantifiers are included in the set of inference examples for each depth. Gold labels for generated premise–hypothesis pairs are automatically determined according to the polarity of the argument position (upward/downward) and the type of predicate replacements (with more general/specific phrases). The ratio of each gold label (entailment/non-entailment) in the training and test sets is set to 1 : 1. To double-check the gold label, we translate each premise–hypothesis pair into a logical formula (see the Appendix for more details). The logical formulas are obtained by combining lambda terms in accordance with meaning composition rules specified in the CFG rules in the standard way (Blackburn and Bos, 2005). We prove the entailment relation using the theorem prover Vampire2, checking whether a proof is found in time for each entailment pair. For all pairs, the output of the prover matched with the entailment relation automatically determined by monotonicity calculus. 3.2 Models We consider three DNN-based NLI models. The first architecture employs long short-term memory (LSTM) networks (Hochreiter and Schmidhuber, 1997). We set the number of layers to three with no attention. Each premise and hypothesis is processed as a sequence of words using a recurrent neural network with LSTM cells, and the final hidden state of each serves as its representation. The second architecture employs multiplicative tree-structured LSTM (TreeLSTM) networks (Tran and Cheng, 2018), which are expected to be more sensitive to hierarchical syntactic structures. Each premise and hypothesis is processed as a tree structure by bottomup combinations of constituent nodes using the same shared compositional function, input word information, and between-word relational information. We parse all premise–hypothesis pairs with the dependency parser using the spaCy li2https://github.com/vprover/vampire brary3 and obtain tree structures. For each experimental setting, we randomly sample 100 tree structures and check their correctness. In LSTM and TreeLSTM, the dimension of hidden units is 200, and we initialize the word embeddings with 300-dimensional GloVe vectors (Pennington et al., 2014). Both models are optimized with Adam (Kingma and Ba, 2015), and no dropout is applied. The third architecture is a Bidirectional Encoder Representations from Transformers (BERT) model (Devlin et al., 2019). We used the baseuncased model pre-trained on Wikipedia and BookCorpus from the pytorch-pretrained-bert library4, fine-tuned for the NLI task using our dataset. In fine-tuning BERT, no dropout is applied, and we choose hyperparameters that are commonly used for MultiNLI. We train all models over 25 epochs or until convergence, and select the best-performing model based on its performance on the validation set. We perform five runs per model and report the average and standard deviation of their scores. 4 Experiments and Discussion I. Systematicity of predicate replacements Figure 2 shows the performance on unseen combinations of quantifiers and predicate replacements. 
In the minimal training set S1, the accuracy of LSTM and TreeLSTM was almost the same as chance, but that of BERT was around 75%, suggesting that only BERT generalized to unseen combinations of quantifiers and predicate replacements. When we train BERT with the training set S2, which contains inference examples generated from combinations of one pair of upward/downward quantifiers and all predicate replacements, the accuracy was 100%. This indicates that by being taught two kinds of quantifiers in the training data, BERT could distinguish between upward and downward for the other quantifiers. The accuracy of LSTM and TreeLSTM increased with increasing the training set size, but did not reach 100%. This indicates that LSTM and TreeLSTM also generalize to inferences involving similar quantifiers to some extent, but their generalization ability is imperfect. When testing models with inferences where adverbs or prepositional phrases are added to the be3https://spacy.io/ 4https://github.com/huggingface/pytorch-pretrained-bert 6111 Figure 2: Results for systematicity of predicate replacements. Accuracy on test sets where (a) the replacement is performed in the subject position, (b) one adverb is added at the beginning of the sentence, (c) one three-word prepositional phrase is added at the beginning of the sentence, and (d) the replacement is in the object position. Sn indicates the experimental setting where the training set Sn is used. ginning of the sentence, the accuracy of all models significantly decreased. This decrease becomes larger as the syntactic structures of the sentences in the test set become increasingly different from those in the training set. Contrary to our expectations, the models fail to maintain accuracy on test sets whose difference from the training set is the structure with the adverb at the beginning of a sentence. Of course, we could augment datasets involving that structure, but doing so would require feeding all combinations of inference pairs into the models. These results indicate that the models tend to estimate the entailment label from the beginning of a premise–hypothesis sentence pair, and that inferential systematicity to draw inferences involving quantifiers and predicate replacements is not completely generalized at the level of arbitrary constituents. II. Systematicity of embedding quantifiers Figure 3 shows the performance of all models on unseen combinations of embedding quantifiers. Even when adding the training set of inferences involving one embedded clause and two quantifiers step-by-step, no model showed improved performance. The accuracy of BERT slightly exceeded chance, but the accuracy of LSTM and TreeLSTM was nearly the same as or lower than chance. These results suggest that all the models fail to generalize to unseen combinations of embedding quantifiers even when they involve similar upward/downward quantifiers. III. Productivity Table 3 shows the performance on unseen depths of embedded clauses. The accuracy on D1 and D2 was nearly 100%, indicating that all models almost completely generalize to inferences containing previously seen depths. When Figure 3: Results for systematicity of embedding quantifiers. Sn indicates the experimental setting where the training set Sn is used. D1+D2 were used as the training set, the accuracy of all models on D3 exceeded chance. Similarly, when D1 + D2 + D3 were used as the training set, the accuracy of all models on D4 exceeded chance. 
This indicates that all models partially generalize to inferences containing embedded clauses one level deeper than the training set. However, standard deviations of BERT and LSTM were around 10, suggesting that these models did not consistently generalize to inferences containing embedded clauses one level deeper than the training set. While the distribution of monotonicity directions (upward/downward) in the training and test sets was uniform, the accuracy of LSTM and BERT tended to be smaller for downward inferences than for upward inferences. This also indicates that these models fail to properly compute monotonicity directions of constituents from syntactic structures. The standard deviation of TreeLSTM was smaller, indicating that TreeLSTM robustly learns inference patterns containing embedded clauses one level deeper than the training set. 6112 Train Dev/Test BERT LSTM TreeLSTM D1 + D2 D1 100.0±0.0 100.0±0.0 100.0±0.1 D2 100.0±0.0 99.8±0.2 99.5±0.1 D3 75.2±10.0 75.4±10.8 86.4±4.1 D4 55.0±3.7 57.7±8.7 58.6±7.8 D5 49.9±4.4 45.8±4.0 48.4±3.7 D3 (down) 71.2±4.0 70.4±4.0 86.4±4.1 D3 (up) 80.5±7.5 84.7±4.9 86.4±4.1 D1 + D2 + D3 D1 100.0±0.0 100.0±0.0 100.0±0.0 D2 100.0±0.0 95.1±7.8 99.6±0.0 D3 100.0±0.0 85.2±8.9 97.7±1.1 D4 77.9±10.8 59.7±10.8 68.0±5.6 D5 53.5±19.6 55.1±8.2 49.6±4.3 D4 (down) 85.8±10.5 76.9±6.6 68.0±5.6 D4 (up) 86.8±1.8 81.1±5.6 68.0±5.6 Table 3: Results for productivity. Dd indicates the set of inferences where the embedding depth is d. Train Dev/Test BERT LSTM TreeLSTM D3 D1 49.6±0.5 48.8±13.2 49.8±4.1 D2 49.8±0.6 47.3±12.1 51.8±1.1 D3 100.0±0.0 100.0±0.0 100.0±0.2 D4 D1 50.3±1.0 46.8±6.5 49.0±0.4 D2 49.6±0.8 45.4±1.8 49.7±0.3 D3 50.2±0.7 45.1±0.6 50.5±0.7 D4 100.0±0.0 100.0±0.0 100.0±0.1 D5 D1 49.9±0.7 43.7±4.4 49.1±1.1 D2 49.1±0.3 43.4±3.9 51.4±0.6 D3 50.6±0.2 44.3±2.7 50.5±0.3 D4 50.9±0.8 44.4±3.4 50.3±0.4 D5 100.0±0.0 100.0±0.0 100.0±0.1 Table 4: Results for localism. However, the performance of all models trained with D1 + D2 on D4 and D5 significantly decreased. Also, performance decreased for all models trained with D1 +D2 +D3 on D5. Specifically, there was significantly decreased performance of all models, including TreeLSTM, on inferences containing embedded clauses two or more levels deeper than those in the training set. These results indicate that all models fail to develop productivity on inferences involving embedding monotonicity. IV. Localism Table 4 shows the performance of all models on localism of embedding monotonicity. When the models were trained with D3, D4 or D5, all performed at around chance on the test set of non-embedding inferences D1 and the test set of inferences involving one embedded clause D2. These results indicate that even if models are trained with a set of inferences containing complex syntactic structures, the models fail to locally interpret their constituents. 
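To make the evaluation protocol behind Tables 3 and 4 concrete — accuracy computed separately per embedding depth and monotonicity direction, reported as mean and standard deviation over five runs — the sketch below shows one way such a breakdown could be computed. The record fields ("depth", "direction", "gold", "pred") and the data layout are illustrative assumptions, not the authors' evaluation code.

```python
# A minimal sketch of the depth-wise accuracy breakdown described above.
from collections import defaultdict
from statistics import mean, stdev

def accuracy_breakdown(runs):
    """runs: list with one entry per training run; each entry is a list of
    prediction records like {"depth": 3, "direction": "down", "gold": ..., "pred": ...}."""
    per_run = defaultdict(list)  # (depth, direction) -> accuracy of each run
    for records in runs:
        correct, total = defaultdict(int), defaultdict(int)
        for r in records:
            key = (r["depth"], r["direction"])
            total[key] += 1
            correct[key] += int(r["pred"] == r["gold"])
        for key in total:
            per_run[key].append(100.0 * correct[key] / total[key])
    return {key: (mean(accs), stdev(accs) if len(accs) > 1 else 0.0)
            for key, accs in per_run.items()}

# Example: report D3 (down) accuracy as "mean ± std" over five runs.
# stats = accuracy_breakdown(five_runs)
# print("D3 (down): %.1f ± %.1f" % stats[(3, "down")])
```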
Performance of data augmentation Prior studies (Yanaka et al., 2019b; Richardson et al., 2020) have shown that given BERT initially trained with Train Dev/Test BERT LSTM TreeLSTM MNLI D1 46.9±0.4 47.2±1.1 43.4±0.3 D2 46.2±0.6 48.3±1.0 49.5±0.4 D3 46.8±0.8 48.9±0.7 41.0±0.4 D4 48.5±0.8 50.6±0.5 48.5±0.2 D5 48.9±0.6 49.3±0.7 48.8±0.5 MNLI-test 84.6±0.2 64.7±0.3 70.4±0.1 D1 + D2 D1 100.0±0.0 100.0±0.1 100.0±0.1 +MNLI D2 100.0±0.0 89.3±9.0 99.8±0.1 D3 67.8±12.5 66.7±13.5 76.3±4.1 D4 46.8±3.7 47.1±14.6 50.7±7.8 D5 41.2±4.3 46.7±11.2 47.5±3.7 MNLI-test 84.4±0.2 39.7±0.5 63.0±0.2 D1 + D2 + D3 D1 100.0±0.0 100.0±0.0 100.0±0.0 +MNLI D2 100.0±0.0 97.1±5.0 99.8±0.0 D3 100.0±0.0 89.2±5.1 98.3±1.1 D4 70.9±7.9 73.4±10.9 76.1±5.6 D5 42.4±4.2 47.8±3.9 57.0±4.3 MNLI-test 84.0±0.1 39.7±0.4 62.8±0.2 Table 5: Results for productivity where models were trained with our synthesized dataset mixed with MultiNLI (MNLI). MultiNLI, further training with synthesized instances of logical inference improves performance on the same types of logical inference while maintaining the initial performance on MultiNLI. To investigate whether the results of our study are transferable to current work on MultiNLI, we trained models with our synthesized dataset mixed with MultiNLI, and checked (i) whether our synthesized dataset degrades the original performance of models on MultiNLI5 and (ii) whether MultiNLI degrades the ability to generalize to unseen depths of embedded clauses. Table 5 shows that training BERT on our synthetic data D1 + D2 and MultiNLI increases the accuracy on our test sets D1 (46.9 to 100.0), D2 (46.2 to 100.0), and D3 (46.8 to 67.8) while preserving accuracy on MultiNLI (84.6 to 84.4). This indicates that training BERT with our synthetic data does not degrade performance on commonly used corpora like MultiNLI while improving the performance on monotonicity, which suggests that our data-synthesis approach can be combined with naturalistic datasets. For TreeLSTM and LSTM, however, adding our synthetic dataset decreases accuracy on MultiNLI. One possible reason for this is that a pre-training based model like BERT can mitigate catastrophic forgetting in various types of datasets. Regarding the ability to generalize to unseen depths of embedded clauses, the accuracy of all 5Following the previous work (Richardson et al., 2020), we used the MultiNLI mismatched development set for MNLI-test. 6113 models on our synthetic test set containing embedded clauses one level deeper than the training set exceeds chance, but the improvement becomes smaller with the addition of MultiNLI. In particular, with the addition of MultiNLI, the models tend to change wrong predictions in cases where a hypothesis contains a phrase not occurring in a premise but the premise entails the hypothesis. Such inference patterns are contrary to the heuristics in MultiNLI (McCoy et al., 2019). This indicates that there may be some trade-offs in terms of performance between inference patterns in the training set and those in the test set. 5 Related Work The question of whether neural networks are capable of processing compositionality has been widely discussed (Fodor and Pylyshyn, 1988; Marcus, 2003). Recent empirical studies illustrate the importance and difficulty of evaluating the capability of neural models. 
Generation tasks using artificial datasets have been proposed for testing whether models compositionally interpret training data from the underlying grammar of the data (Lake and Baroni, 2017; Hupkes et al., 2018; Saxton et al., 2019; Loula et al., 2018; Hupkes et al., 2019; Bernardy, 2018). However, these conclusions are controversial, and it remains unclear whether the failure of models on these tasks stems from their inability to deal with compositionality. Previous studies using logical inference tasks have also reported both positive and negative results. Assessment results on propositional logic (Evans et al., 2018), first-order logic (Mul and Zuidema, 2019), and natural logic (Bowman et al., 2015) show that neural networks can generalize to unseen words and lengths. In contrast, Geiger et al. (2019) obtained negative results by testing models under fair conditions of natural logic. Our study suggests that these conflicting results come from an absence of perspective on combinations of semantic properties. Regarding assessment of the behavior of modern language models, Linzen et al. (2016), Tran et al. (2018), and Goldberg (2019) investigated their syntactic capabilities by testing such models on subject–verb agreement tasks. Many studies of NLI tasks (Liu et al., 2019; Glockner et al., 2018; Poliak et al., 2018; Tsuchiya, 2018; McCoy et al., 2019; Rozen et al., 2019; Ross and Pavlick, 2019) have provided evaluation methodologies and found that current NLI models often fail on particular inference types, or that they learn undesired heuristics from the training set. In particular, recent works (Yanaka et al., 2019a,b; Richardson et al., 2020) have evaluated models on monotonicity, but did not focus on the ability to generalize to unseen combinations of patterns. Monotonicity covers various systematic inferential patterns, and thus is an adequate semantic phenomenon for assessing inferential systematicity in natural language. Another benefit of focusing on monotonicity is that it provides hard problem settings against heuristics (McCoy et al., 2019), which fail to perform downward-entailing inferences where the hypothesis is longer than the premise. 6 Conclusion We introduced a method for evaluating whether DNN-based models can learn systematicity of monotonicity inference under four aspects. A series of experiments showed that the capability of three models to capture systematicity of predicate replacements was limited to cases where the positions of the constituents were similar between the training and test sets. For embedding monotonicity, no models consistently drew inferences involving embedded clauses whose depths were two levels deeper than those in the training set. This suggests that models fail to capture inferential systematicity of monotonicity and its productivity. We also found that BERT trained with our synthetic dataset mixed with MultiNLI maintained performance on MultiNLI while improving the performance on monotonicity. This indicates that though current DNN-based models do not systematically interpret monotonicity inference, some models might have sufficient ability to memorize different types of reasoning. We hope that our work will be useful in future research for realizing more advanced models that are capable of appropriately performing arbitrary inferences. Acknowledgement We thank the three anonymous reviewers for their helpful comments and suggestions. We are also grateful to Benjamin Heinzerling and Sosuke Kobayashi for helpful discussions. 
This work was partially supported by JSPS KAKENHI Grant Numbers JP20K19868 and JP18H03284, Japan. 6114 References Lasha Abzianidze. 2015. A tableau prover for natural logic and language. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP-2015), pages 2492–2502. Murat Aydede. 1997. Language of thought: The connectionist contribution. Minds and Machines, 7(1):57–101. Johan van Benthem. 1983. Determiners and logic. Linguistics and Philosophy, 6(4):447–478. Jean-Philippe Bernardy. 2018. Can recurrent neural networks learn nested recursion. Linguistic Issues in Language Technology, 16. Patrick Blackburn and Johan Bos. 2005. Representation and Inference for Natural Language: A First Course in Computational Semantics. Center for the Study of Language and Information. Samuel R. Bowman, Christopher Potts, and Christopher D. Manning. 2015. Recursive neural networks can learn logical semantics. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 12–21. Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing Textual Entailment: Models and Applications. Morgan & Claypool Publishers. Jacob Devlin, Chang Ming-Wei, Lee Kenton, and Toutanova Kristina. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT-2019), pages 4171–4186. Richard Evans, David Saxton, David Amos, Pushmeet Kohli, and Edward Grefenstette. 2018. Can neural networks understand logical entailment? In International Conference on Learning Representations (ICLR-2018). Jerry A. Fodor and Zenon W. Pylyshyn. 1988. Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2):3–71. Atticus Geiger, Ignacio Cases, Lauri Karttunen, and Christopher Potts. 2019. Posing fair generalization tasks for natural language inference. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP-2019), pages 4484–4494. Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL-2018), pages 650–655. Yoav Goldberg. 2019. Assessing BERT’s syntactic abilities. CoRR, abs/1901.05287. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. Hai Hu, Qi Chen, Kyle Richardson, Atreyee Mukherjee, Lawrence S. Moss, and Sandra Kübler. 2019. Monalog: a lightweight system for natural language inference based on monotonicity. CoRR, abs/1910.08772. Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and Elia Bruni. 2019. Compositionality decomposed: how do neural networks generalise? CoRR, abs/1908.08351. Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and ‘diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907–926. Thomas Icard and Lawrence Moss. 2014. Recent progress in monotonicity. Linguistic Issues in Language Technology, 9(7):167–194. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR-2015). Brenden M. 
Lake and Marco Baroni. 2017. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In International Conference on Machine Learning (ICML-2017). Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521– 535. Nelson F. Liu, Roy Schwartz, and Noah A. Smith. 2019. Inoculation by fine-tuning: A method for analyzing challenge datasets. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT-2019), pages 2171–2179. João Loula, Marco Baroni, and Brenden Lake. 2018. Rearranging the familiar: Testing compositional generalization in recurrent networks. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 108–114. Gary Marcus. 2003. The Algebraic Mind: Integrating Connectionism and Cognitive Science. MIT Press. R. Thomas McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In 6115 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL-2019), pages 3428–3448. Koji Mineshima, Pascual Martínez-Gómez, Yusuke Miyao, and Daisuke Bekki. 2015. Higher-order logical inference with compositional semantics. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP2015), pages 2055–2061. Mathijs Mul and Willem Zuidema. 2019. Siamese recurrent networks learn first-order logic reasoning and exhibit zero-shot compositional generalization. CoRR, abs/1908.08351. Peter Pagin and Dag Westerståhl. 2010. Compositionality I: Definitions and variants. Philosophy Compass, 5(3):250–264. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP-2014), pages 1532–1543. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics (*SEM-2018), pages 180–191. Kyle Richardson, Hai Hu, Lawrence S. Moss, and Ashish Sabharwal. 2020. Probing natural language inference models through semantic fragments. In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI-2020). Alexis Ross and Ellie Pavlick. 2019. How well do NLI models capture verb veridicality? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP-2019), pages 2230–2240. Ohad Rozen, Vered Shwartz, Roee Aharoni, and Ido Dagan. 2019. Diversify your datasets: Analyzing generalization via controlled variance in adversarial datasets. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL-2019), pages 196–205. David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. 2019. Analysing mathematical reasoning abilities of neural models. In International Conference on Learning Representations (ICLR-2019). Ke Tran, Arianna Bisazza, and Christof Monz. 2018. The importance of being recurrent for modeling hierarchical structure. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP-2018), pages 4731– 4736. Nam Khanh Tran and Weiwei Cheng. 2018. Multiplicative tree-structured long short-term memory networks for semantic representations. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics (*SEM-2018), pages 276– 286. Masatoshi Tsuchiya. 2018. Performance impact caused by hidden bias of training data for recognizing textual entailment. In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC-2018). Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the International Conference on Learning Representations (ICLR-2019). Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACLHLT-2018), pages 1112–1122. Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. 2019a. Can neural networks understand monotonicity reasoning? In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 31–40. Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. 2019b. HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), pages 250–255. A Appendix A.1 Lexical entries and replacement examples Table 6 shows a context-free grammar and a set of predicate replacements used to generate inference examples. Regarding the context-free grammar, we consider premise–hypothesis pairs containing the quantifier Q in the subject position, and the predicate replacement is performed in both the first and second arguments of the quantifier. When generating premise–hypothesis pairs involving embedding monotonicity, we consider inferences involving four types of predicate replacements (hyponyms Nhypn, adjectives Adj, prepositions PP, and relative clauses RelC) in the 6116 Context-free grammar for premise sentences S → NP IV1 NP → Q N | Q N S S → WhNP TV NP | WhNP NP TV | NP TV Lexicon Q → {no, at most three, less than three, few, some, at least three, more than three, a few} N → {dog, rabbit, lion, cat, bear, tiger, elephant, fox, monkey, wolf} IV1 → {ran, walked, came, waltzed, swam, rushed, danced, dawdled, escaped, left} IV2 → {laughed, groaned, roared, screamed, cried} TV → {kissed, kicked, hit, cleaned, touched, loved, accepted, hurt, licked, followed} WhNP → {that, which} Nhypn → {animal, creature, mammal, beast} Adj → {small, large, crazy, polite, wild} PP → {in the area, on the ground, at the park, near the shore, around the island} RelC → {which ate dinner, that liked flowers, which hated the sun, that stayed up late} Adv → {slowly, quickly, seriously, suddenly, lazily} Predicate replacements for hypothesis sentences N to Nhypn | Adj N | N PP | N RelC IV1 to IV1 Adv | IV1 PP | IV1 or IV2 | IV1 and IV2 Table 6: A context-free grammar and a set of predicate replacements used to generate inference examples. 
Predicate replacement is applied to N or IV1, replacing it with a corresponding phrase. first argument of the quantifier. To generate natural sentences consistently, we use the past tense for verbs; for lexical entries and predicate replacements, we select those that do not violate selectional restriction. To check the gold labels for the generated premise–hypothesis pairs, we translate each sentence to a first-order logic (FOL) formula and test if the entailment relation holds by theorem proving. The FOL formulas are compositionally derived by combining lambda terms assigned to each lexical item in accordance with meaning composition rules specified in the CFG rules in the standard way (Blackburn and Bos, 2005). Since our purpose is to check the polarity of monotonicity marking, vague quantifiers such as few are represented according to their polarity. For example, we map the quantifier few onto the lambda-term λPλQ¬∃x(few(x) ∧P(x) ∧Q(x)). A.2 Results on embedding monotonicity Table 7 shows all results on embedding monotonicity. This indicates that all models partially generalize to inferences containing embedded clauses one level deeper than the training set, but fail to generalize to inferences containing embedded clauses two or more levels deeper. 6117 Train Test BERT LSTM TreeLSTM D1 D1 100.0±0.0 91.1±5.4 100.0±0.0 D2 44.1±6.4 34.1±3.8 48.1±1.2 D3 47.6±3.2 45.1±5.1 48.5±1.8 D4 49.6±1.0 44.4±6.5 50.1±2.1 D5 49.9±1.1 44.1±5.3 50.3±1.1 D1 ∪D2 D1 100.0±0.0 100.0±0.0 100.0±0.1 D2 100.0±0.0 99.8±0.2 99.5±0.1 D3 75.2±10.0 75.4±10.8 86.4±4.1 D4 55.0±3.7 57.7±8.7 58.6±7.8 D5 49.9±4.4 45.8±4.0 48.4±3.7 D1 ∪D2 ∪D3 D1 100.0±0.0 100.0±0.0 100.0±0.0 D2 100.0±0.0 95.1±7.8 99.6±0.0 D3 100.0±0.0 85.2±8.9 97.7±1.1 D4 77.9±10.8 59.7±10.8 68±5.6 D5 53.5±19.6 55.1±8.2 49.6±4.3 D1 ∪D2 ∪D3 ∪D4 D1 100.0±0.0 100.0±0.0 100.0±0.1 D2 100.0±0.0 99.4±1.1 99.7±0.2 D3 100.0±0.0 91.5±4.0 98.9±1.1 D4 100.0±0.0 74.1±4.2 94.0±2.3 D5 89.1±5.4 64.2±4.7 69.5±4.1 D1 ∪D2 ∪D3 ∪D4 ∪D5 D1 100.0±0.0 100.0±0.0 100.0±0.1 D2 100.0±0.0 95.8±7.3 99.8±0.1 D3 100.0±0.0 90.5±13.1 99.1±0.2 D4 100.0±0.0 90.2±6.0 94.8±0.1 D5 100.0±0.0 93.6±3.1 83.2±12.1 D2 D1 36.4±14.4 25.3±9.3 44.9±4.1 D2 100.0±0.0 100.0±0.0 100.0±0.2 D3 47.6±10.3 43.9±17.5 51.8±1.1 D4 61.7±7.8 57.9±14.7 51.7±0.6 D5 42.6±5.1 47.2±2.9 50.9±0.4 D3 D1 49.6±0.5 48.8±13.2 49.8±4.1 D2 49.8±0.6 47.3±12.1 51.8±1.1 D3 100.0±0.0 100.0±0.0 100.0±0.2 D4 49.7±1.0 42.0±0.6 51.3±0.7 D5 50.0±0.4 38.4±9.6 49.8±0.3 D4 D1 50.3±1.0 46.8±6.5 49.0±0.4 D2 49.6±0.8 45.4±1.8 49.7±0.3 D3 50.2±0.7 45.1±0.6 50.5±0.7 D4 100.0±0.0 100.0±0.0 100.0±0.1 D5 49.7±0.5 45.1±0.9 50.5±1.1 D5 D1 49.9±0.7 43.7±4.4 49.1±1.1 D2 49.1±0.3 43.4±3.9 51.4±0.6 D3 50.6±0.2 44.3±2.7 50.5±0.3 D4 50.9±0.8 44.4±3.4 50.3±0.4 D5 100.0±0.0 100.0±0.0 100.0±0.1 Table 7: All results on embedding monotonicity.
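As a rough illustration of the generation procedure in Appendix A.1, the snippet below produces a premise from a fragment of the Table 6 lexicon, applies one predicate replacement (adding an adjective, i.e. a more specific phrase), and assigns the gold label from the quantifier's monotonicity polarity. The polarity assignments and lexicon subset are simplified assumptions for illustration only, not the released data-generation code; surface agreement (e.g. pluralization) is ignored.

```python
import random

# Simplified fragment of the Table 6 lexicon (illustrative subset only).
QUANTIFIERS = {  # assumed polarity of the first argument: +1 upward, -1 downward
    "some": +1, "at least three": +1, "more than three": +1, "a few": +1,
    "no": -1, "at most three": -1, "less than three": -1, "few": -1,
}
NOUNS = ["dog", "rabbit", "lion", "cat"]
IV1 = ["ran", "walked", "came", "swam"]
ADJ = ["small", "large", "wild"]

def generate_pair():
    """Premise 'Q N IV1'; hypothesis replaces N with 'Adj N' (a more specific phrase)."""
    q = random.choice(list(QUANTIFIERS))
    n, v = random.choice(NOUNS), random.choice(IV1)
    premise = f"{q} {n} {v}"
    hypothesis = f"{q} {random.choice(ADJ)} {n} {v}"
    # Gold label: a more specific replacement is entailed only in a downward-monotone
    # argument position (e.g. "no dog ran" entails "no small dog ran").
    label = "entailment" if QUANTIFIERS[q] < 0 else "non-entailment"
    return premise, hypothesis, label

print(generate_pair())
```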
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6118–6129 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6118 Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder Daya Guo1∗, Duyu Tang2, Nan Duan2, Jian Yin1, Daxin Jiang3 and Ming Zhou2 1 The School of Data and Computer Science, Sun Yat-sen University. Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, P.R.China 2 Microsoft Research Asia, Beijing, China 3 Microsoft Search Technology Center Asia, Beijing, China {guody5@mail2,issjyin@mail}.sysu.edu.cn {dutang,nanduan,djiang,mingzhou}@microsoft.com Abstract Generating inferential texts about an event in different perspectives requires reasoning over different contexts that the event occurs. Existing works usually ignore the context that is not explicitly provided, resulting in a context-independent semantic representation that struggles to support the generation. To address this, we propose an approach that automatically finds evidence for an event from a large text corpus, and leverages the evidence to guide the generation of inferential texts. Our approach works in an encoderdecoder manner and is equipped with a Vector Quantised-Variational Autoencoder, where the encoder outputs representations from a distribution over discrete variables. Such discrete representations enable automatically selecting relevant evidence, which not only facilitates evidence-aware generation, but also provides a natural way to uncover rationales behind the generation. Our approach provides state-ofthe-art performance on both Event2Mind and ATOMIC datasets. More importantly, we find that with discrete representations, our model selectively uses evidence to generate different inferential texts. 1 Introduction Inferential text generation aims to understand dailylife events and generate texts about their underlying causes, effects, and mental states of event participants, which is crucial for automated commonsense reasoning. Taking Figure 1 as an example, given an event “PersonX reads PersonY’s diary”, the cause of the participant “PersonX” is to “obtain Person Y’s secrets” and the mental state of “PersonX” is “guilty”. Standard approaches for inferential text generation (Rashkin et al., 2018; Sap et al., 2019; Bosselut et al., 2019; Du et al., 2019) typically only ∗Work done while this author was an intern at Microsoft Research. PersonX stole PersonY’s diary secretly PersonY invites PersonX to read his diary know more about PersonY PersonX feels Event Background Inferences obtain PersonY’s secrets PersonX wants to PersonX reads PersonY’s diary guilty curious Figure 1: An examples of inferential text generation on mental states of event participants. We show two kinds of reasonable inferences for the event under different background knowledge that is absent in the dataset. take the event as the input, while ignoring the background knowledge that provides crucial evidence to generate reasonable inferences. For example, if the background knowledge of this example is “PersonY invites PersonX to read his diary”, the outputs should be different. In this paper, we present an evidence-aware generative model, which first retrieves relevant evidence from a large text corpus and then leverages retrieved evidence to guide the generation of inferential texts. 
Our model is built upon Transformerbased (Vaswani et al., 2017) encoder-decoder architecture, and is equipped with Vector QuantisedVariational Autoencoder to map an event to a discrete latent representation (van den Oord et al., 2017). These discrete representations embody the latent semantic distribution of inferences given the event, thus supporting selection of relevant evidence as background knowledge to guide the generation in different perspectives. Furthermore, our model has two attractive properties: (1) it avoids the problem of posterior collapse, caused by latent variables being ignored, in traditional variational autoencoder with continuous latent variables (van den Oord et al., 2017), and more importantly (2) it uncovers the rationale of a generation to some extent through tracing back the evidence that guides the generation and the selected discrete representation of the event. 6119 We evaluate our approach on Event2Mind (Rashkin et al., 2018) and ATOMIC (Sap et al., 2019) datasets, both of which focus on reasoning about causes and effects of events and mental states of event participants. Experimental results show that our approach achieves state-ofthe-art performances on both datasets. Further analysis shows that our approach can equip the generation with an explicit control over the semantics of latent variables and selected evidence to generate inferential texts in different perspective. The source codes are available at https: //github.com/microsoft/EA-VQ-VAE. 2 Task Definition and Datasets Figure 1 shows an example of the task, which aims to generate inferential texts about causes and effects of daily-life events and mental states of the events participants. Formally, given an event x = {x1, x2, .., xn} and an inference dimension r such as causes of the event, the goal is to generate multiple inferential texts Y = {y(1), y(2), ..., y(m)}1, where the background knowledge of the event is absent in the dataset. We conduct experiments on Event2Mind2 (Rashkin et al., 2018) and ATOMIC3 (Sap et al., 2019) datasets. Both datasets contain about 25,000 unique events extracted from multiple data sources and provide multiple inferences under different inference dimensions by crowd-sourcing on Amazon Mechanical Turk. Event2Mind and ATOMIC contain 2.6 and 3.6 inferences on average per example, respectively. Event2Mind focuses on three inference dimensions related to mental states of participants (i.e. intents and reactions of the events participants), while ATOMIC has broader inference dimensions including mental states, probable preand post conditions of the event, and persona status. More details about the two datasets are provided in the Appendix A. 3 Overview of the Approach We present our approach in this section, which first retrieves relevant evidence from a large text corpus, and then utilizes retrieved evidence as background knowledge to generate inferences. Figure 2 gives an overview of our approach. 1We use inference and inferential text interchangably 2https://uwnlp.github.io/Event2Mind/ 3https://homes.cs.washington.edu/ ˜msap/ATOMIC/ Event Text Corpus Decoder Evidence Retrieval Evidence ݖ VQ-VAE Inferential Text Figure 2: An overview of our approach. First, our encoder takes an event as the input and outputs a semantic representation z from a distribution over discrete latent variables, which is based on Vector Quantised-Variational Autoencoder (VQVAE) (van den Oord et al., 2017). 
We then use the event as a query to retrieve top K evidence from a large text corpus as background knowledge. Lastly, the evidence-aware decoder takes the semantic representation and evidence as the input and generates the inference y, where the semantic representation selectively uses relevant evidence as background knowledge to guide the generation of inferences. 3.1 Vector Quantised-Variational Autoencoder Figure 3 illustrates the model architecture of our approach. The model is based on encoder-decoder framework equipped with Vector QuantisedVariational Autoencoder (VQ-VAE) (van den Oord et al., 2017), where the VQ-VAE is learned to model the latent semantic distribution within inferences given an event. Latent variables z from the VQ-VAE will be used to calculate the relevant of retrieved evidence in the semantic space to guide the generation. Compared with continuous VAEs, VQ-VAE does not suffer from “posterior collapse” issues that latent variables are often ignored with a powerful decoder (van den Oord et al., 2017). VQVAE mainly consists of three parts: a codebook for modeling the latent semantic distribution within inferences over discrete latent variables, a recognition network for modeling a posterior distribution qφ(z|x, y), and a prior network for inferring a prior distribution pθ(z|x). 6120 Event ݄ ݌ሺȉሻcodebook encoder decoder Text Corpus evidence retrieval ݖ sentence 1 sentence 2 sentence 3 sentence 4 sentence 5 inferential text Figure 3: The model architecture of our approach. Codebook A codebook aims to model the latent semantic discrete distribution within inferences, which is composed of k discrete latent variables (i.e. k-way categorical). We define the codebook as an embedding table T ∈Rk×d, where d is the dimension of latent variables. The semantic latent variable z is indexed from the posterior distribution qφ(z|x, y) in the training phase and the prior distribution pθ(z|x) in the inference phase over the codebook, respectively. Posterior Distribution We follow van den Oord et al. (2017) to model a discrete posterior distribution qφ(z|x, y) over the codebook. First, we use Transformer (Vaswani et al., 2017) with two layers as our encoder, where the input sequence is the concatenation of an event x and its inference y. In order to obtain the representation of an example (x, y), we add a special token in the last of the input sequence and take the hidden state h(x,y) of the special token as the representation of the example. The posterior categorical probability distribution qφ(z|x, y) is defined as one-hot as follows. qφ(zk|x, y) = ⎧ ⎨ ⎩ 1 if k = arg min j ||h(x,y) −zj||2 0 otherwise (1) As we can see, the hidden state h(x,y) of the example is mapped onto the nearest element z′ of the codebook under the posterior distribution qφ(z|x, y). z′ = zk where k = arg min j ||h(x,y)−zj||2 (2) Prior Distribution In the inference phase, only the event x is given, which requires a prior distribution estimator to infer the prior distribution pθ(z|x). Since the prior distribution is crucial for the inference phase, we use a powerful pre-trained language model such as RoBERTa (Liu et al., 2019) to encode the event into a hidden state h. Since the prior distribution is categorical, we then use a k-way classifier following a softmax function to infer the prior distribution, where Wk ∈Rd×k is the model parameters. pθ(z|x) = softmax(hWk) (3) The training detail of the VQ-VAE will be introduced in the Section 3.4. 
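A minimal PyTorch sketch of the discrete bottleneck described above: the posterior deterministically maps the example representation h(x,y) to its nearest codebook entry (Equations 1–2), and a prior head over an event encoding produces pθ(z|x) (Equation 3). The codebook size, hidden dimension, pooling choice, and variable names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class Codebook(nn.Module):
    """k discrete latent codes of dimension d (the embedding table T in the paper)."""
    def __init__(self, k=512, d=768):
        super().__init__()
        self.table = nn.Embedding(k, d)

    def posterior(self, h_xy):
        # Eq. (1)-(2): one-hot posterior -> index of the code nearest to h_(x,y).
        dist = torch.cdist(h_xy, self.table.weight)   # (batch, k) pairwise distances
        idx = dist.argmin(dim=-1)                     # nearest code per example
        return idx, self.table(idx)                   # (batch,), (batch, d)

class PriorHead(nn.Module):
    """Eq. (3): p_theta(z|x) = softmax(h W_k); h is an event encoding
    (the paper uses RoBERTa; any sentence encoder works for this sketch)."""
    def __init__(self, k=512, d=768):
        super().__init__()
        self.proj = nn.Linear(d, k, bias=False)

    def forward(self, h_event):
        return torch.softmax(self.proj(h_event), dim=-1)   # (batch, k)

# Usage sketch, with random vectors standing in for the encoders:
codebook, prior = Codebook(), PriorHead()
h_xy, h_event = torch.randn(4, 768), torch.randn(4, 768)
z_idx, z = codebook.posterior(h_xy)   # training-time latent (posterior assignment)
p_z = prior(h_event)                  # inference-time distribution over codes
```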
3.2 Evidence Retrieval In this section, we describe how to retrieve eventrelated evidence as background knowledge. Given an event, we expect that retrieved evidence can contain the event and provide its context as a clue to guide the generation. To retrieve event-related evidence, we use the event as a query to search evidence from a large text corpus. Specifically, we first remove stop words in the given event and then concatenate the words as a query to search evidence from the corpus by Elastic Search engine4. The engine ranks the matching scores between the query and all sentences using BM25 and select top K sentences as evidence C = {c1, c2, ..., cK}. To provide detailed context about the event, we build our corpus upon BooksCorpus (Zhu et al., 2015) that consists of 11,038 story books, since stories usually give a detailed account of an event such as causes and effects of the event. 3.3 Evidence-Aware Decoder In this section, we propose an evidence-aware decoder, which consists of two components, evidence selection and a generator, respectively. Evidence selection aims to calculate a context distribution 4https://www.elastic.co/ 6121 ps(c|z) given a latent variable z to model the relevance of retrieved evidence, while the generator pm(y|x, c) takes an event x and evidence c as the input to generate the inferential text y. 3.3.1 Evidence Selection The relevance of retrieved evidence is different depending on the semantics of inference, which requires a context distribution to model the relevance. For examples, given an event “PersonX reads PersonY’s diary” and its inference “PersonX feels guilty”, the relevance of the evidence “PersonX stole PersonY’s diary” should be higher than that of the evidence “PersonY invites PersonX to read his diary”. However, inferences are unseen in the inference phase, thus we cannot use inferences to model the context distribution. Instead, we utilize semantic latent variables from the VQVAE that models the latent semantic distribution of inferences given an event to calculate the relevance of retrieved evidence. Evidence selection aims to calculate a context distribution ps(c|z) over retrieved evidence given a semantic latent variable z to model the relevance of retrieved evidence. Considering that term-based retrieval (i.e. BM25) may fail to retrieve relevant evidences and all retrieved evidence cannot support the generation, we add an empty evidence cφ into the set C of retrieved evidence as the placeholder. We first use Transformer with two layers to encode retrieved evidence into context vectors HC = {hc1, hc2, .., hcK, hcφ} in the semantic space. Then, the context distribution ps(c|z) over retrieved evidence given the semantic latent variable z is calculated as one-hot as follows. ps(ck|z) = ⎧ ⎨ ⎩ 1 if k = arg min j ||hcj −z||2 0 otherwise (4) As we can see, the latent variable z is mapped onto the nearest element cz of the retrieved evidence under the context distribution ps(c|z). cz = ck where k = arg min j ||hcj −z||2 (5) Another “soft” distribution such as using an attention mechanism to calculate the relevance of retrieved evidence can also model the context distribution, but we choose the one-hot distribution as our context distribution since it maps the latent variable z onto the nearest element of the retrieved evidence, the property of which can help effectively learn the model (described in the Section 3.4). 
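For concreteness, the sketch below first retrieves top-K evidence sentences with a BM25 ranker (the paper queries BooksCorpus through an Elasticsearch engine; the rank_bm25 package is used here only as a lightweight stand-in) and then realizes the one-hot context distribution of Equations 4–5 by picking the evidence vector nearest to the latent variable, with the empty-evidence placeholder included. The stop-word list and the evidence encoder are simplified assumptions.

```python
import torch
from rank_bm25 import BM25Okapi   # lightweight stand-in for the Elasticsearch/BM25 engine

STOP = {"a", "an", "the", "of", "to", "is", "'s"}   # toy stop-word list

def retrieve_evidence(event, corpus, k=20):
    """Rank corpus sentences by BM25 against the stop-word-filtered event and return top k."""
    query = [w for w in event.lower().split() if w not in STOP]
    bm25 = BM25Okapi([s.lower().split() for s in corpus])
    return bm25.get_top_n(query, corpus, n=k)

def select_evidence(h_evidence, z):
    """Eq. (4)-(5): one-hot p_s(c|z) -> index of the evidence vector nearest to z.
    h_evidence: (K+1, d) encodings of K retrieved sentences plus one empty placeholder."""
    return int(((h_evidence - z) ** 2).sum(dim=-1).argmin())

# Usage sketch: encode the K retrieved sentences with any sentence encoder, append a
# zero vector as the empty-evidence placeholder, then select with a sampled latent z:
#   evidence = retrieve_evidence("PersonX reads PersonY's diary", corpus)
#   h_evidence = torch.cat([encode(evidence), torch.zeros(1, 768)], dim=0)
#   chosen_index = select_evidence(h_evidence, z)
```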
3.3.2 Generator Recently, Transformer-based (Vaswani et al., 2017) language models like GPT-2 (Radford et al., 2019) have achieved strong performance in text generation, which is pre-trained from a large-scale text corpus and then fine-tuned on downstream tasks. In this work, we use the GPT-2 pm(y|x, c) as the backbone of our generator and further take retrieved evidence into account. A general approach to utilize evidence to guide the generation is to calculate the context vector hc = K+1 i=1 ps(ci|z)hci as the input of GPT-2 according to the relevance ps(c|z) of retrieved evidence. However, this approach changes the architecture of GPT-2, invalidating the original weights of pre-trained GPT-2. Instead, we sample an evidence c from the context distribution ps(c|z) and then concatenate the event and the selected evidence as the input. To make the paper self-contained, we briefly describe the GPT-2, which takes an evidence and an event as the input and generates the inference y = {y1, y2, .., yn}. This model applies N transformer layers over the input tokens to produce an output distribution over target tokens: h0 = [c; x; y<t]We + Wp hl = transformerl−1(hl−1) p(yt) = softmax(hN−1 last W T e ) (6) where We is the token embedding matrix, Wp is the position embedding matrix, and hN−1 last is the hidden state of the last token on the top layer. Each transformer layer transformerl−1 contains an architecturally identical transformer block that applies a masked multi-headed self-attention operation followed by a feed forward layer over the input hl−1 in the l-th layer. ˆgl = MultiAttn(hl−1) gl = LN(ˆgl + hl−1) ˆhl = FFN(gl) hl = LN(ˆhl + gl) (7) where MultiAttn is a masked multi-headed selfattention mechanism, which is similar to Vaswani et al. (2017), FFN is a two layers feed forward network, and LN represents a layer normalization operation (Ba et al., 2016). 6122 3.4 Training Our entire approach corresponds to the following generative process. Given an event x, we first sample a latent variable z from the VQ-VAE pθ(z|x). We then select relevant evidence c according to the semantics of the latent variable from the context distribution ps(c|z). Finally, the generator pm(y|x, c) takes the event x and the selected evidence c as the input and generate the inference y. Therefore, the probability distribution p(y|x) over inferences y given the event x is formulated as follow. p(y|x) =  z∈T  c∈C pm(y|x, c)ps(c|z)pθ(z|x) (8) A straightforward method for learning our model might be maximizing the marginal likelihood by joint learning, but it is computationally intractable. Instead, we first learn the VQ-VAE with the prior distribution pθ(z|x) in isolation, which can enable the codebook to capture the latent semantics within inferences. Then, we train the evidence-aware decoder under the posterior distribution qφ(z|x, y). Training VQ-VAE To enable the codebook to capture the latent semantics within inferences, we train the VQ-VAE by reconstructing the inferential text y using the latent variable z. We use the pre-trained language model GPT-2 (Radford et al., 2019) as our decoder to generate the inference p(y|x, z), where the input is the sum of token embedding, position embedding and the latent variable z. To make reconstruction better conditioned on the latent variable, we replace each query in the multi-head self-attention mechanism with the sum of the latent variable and the query, as well for keys, values and hidden states on the top layer. We follow van den Oord et al. 
(2017) to learn the VQ-VAE by minimizing the loss function. lossrec = −logp(y|x, h(x,y) + sg[z −h(x,y)])+ ||sg[h(x,y)] −z||2 2 + β||h(x,y) −sg[z]||2 2 (9) where sg stands for the stop gradient operator that has zero partial derivatives during differentiation, and β is a hyperparameter which controls the speed to change the latent variable. We set the β as 0.25 in all experiments. The decoder optimizes the first loss term (reconstruction) only, the encoder optimizes the first and the last loss terms, and the codebook are updated by the middle loss term. We obtain the posterior distribution qφ(z|x, y) after optimizing the encoder and the codebook. Afterward, we learn the prior distribution estimator to infer the prior distribution pθ(z|x). Since the posterior distribution is categorical, we can calculate approximate prior distributions as follow in the training dataset D, where N(x) is the number of examples that includes the event x. p(z|x) =  (x,yi)∈D qφ(z|x, yi) N(x) (10) Therefore, we can fit the prior distributions by minimizing the KL divergence. lossprior = KL(p(z|x)||pθ(z|x)) (11) Training Evidence-Aware Decoder After training VQ-VAE, we jointly learn the context distribution ps(c|z) and the generator pm(y|x, c) by maximizing the following marginal likelihood under the posterior distribution qφ(z|x, y). logp(y|x) = Ez∼qφ[  c∈C logpm(y|x, c)ps(c|z)] (12) According to the Equation 2, the example (x, y) is mapped onto the nearest element z′ of the codebook under the posterior distribution qφ(z|x, y). Meanwhile, according to the Equation 5, the latent variable z′ is mapped onto the nearest element cz′ of retrieved evidence. Therefore, the objective in Equation 12 can be simplified as follow. logp(y|x) = logpm(y|x, cz′) + logps(cz′|z′) (13) Since the ground truth evidence for the example is unobserved, we cannot directly train the model by maximizing the marginal likelihood. To remedy this problem, we use reinforcement learning algorithm to optimize the objective. R = δ(pm(y|x, cz′) −pm(y|x, cr)) logp(y|x) = logpm(y|x, cz′) + Rlogps(cz′|z′) (14) where R is the reward designed to guide the model training, δ(x) is 1 if x is larger than 0 otherwise −1, and cr is a randomly selected evidence where cr ̸= cz′. The idea of designing the reward is that correct evidence should increase the probability of the gold inference compared with other evidence. Note that there is no real gradient defined for ps(c|z), instead, we approximate the gradient similar to the straightthrough estimator (Bengio et al., 2013). logp(y|x) = logpm(y|x, cz′) −R||hcz′ −z′||2 2 (15) 6123 Methods xIntent xNeed xAttr xEffect xReact xWant oEffect oReact oWant Overall Single Task S2S 8.17 12.35 2.96 5.26 3.43 13.44 6.42 4.09 7.08 7.02 VRNMT 9.52 13.35 4.87 4.42 7.64 9.80 13.71 5.28 10.79 8.82 CWVAE 12.12 15.67 5.63 14.64 8.13 15.01 11.63 8.58 13.83 11.69 Multi Task S2S* 24.53 23.85 5.06 9.44 5.38 24.68 7.93 5.60 21.30 14.20 COMET* 25.82 25.54 5.39 10.39 5.36 26.41 8.43 5.65 21.96 15.00 COMET 15.10 EA-VQ-VAE 26.89 25.95 5.72 10.96 5.68 25.94 8.78 6.10 22.48 15.40 Table 1: BLEU score on nine inference dimensions of the ATOMIC test dataset with different approaches. For inference dimensions, “x” and “o” refers to PersonX and others, respectively (e.g. “xAttr”: attribute of PersonX, “oEffect”: effect on others). The tag (*) means re-implementation. Thus, we can optimize the evidence-aware decoder by maximizing the marginal likelihood in the Equation 15. Please see more details about the model hyperparameters in Appendix B. 
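The objective in Equation 9 can be written almost verbatim in PyTorch, with detach() playing the role of the stop-gradient operator sg[·]; the straight-through substitution that feeds h(x,y) + sg[z − h(x,y)] to the decoder is what lets gradients reach the encoder. This is a generic sketch of that loss under stated assumptions, with the decoder call left abstract rather than tied to the authors' GPT-2 setup.

```python
import torch
import torch.nn.functional as F

def vq_vae_loss(decoder_nll_fn, h_xy, z, beta=0.25):
    """Eq. (9): reconstruction + codebook + commitment terms.
    decoder_nll_fn: callable returning -log p(y | x, latent) for a given latent;
    h_xy: encoder output for (x, y); z: its nearest codebook vector."""
    # Straight-through latent: equals z in the forward pass, while gradients
    # flow back into h_xy (sg[.] implemented as .detach()).
    latent = h_xy + (z - h_xy).detach()
    rec = decoder_nll_fn(latent)                 # -log p(y | x, h + sg[z - h])
    codebook = F.mse_loss(z, h_xy.detach())      # ||sg[h] - z||^2  (updates the codebook)
    commit = F.mse_loss(h_xy, z.detach())        # ||h - sg[z]||^2  (updates the encoder)
    return rec + codebook + beta * commit
```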
4 Experiment 4.1 Model Comparisons Following Sap et al. (2019), we first use the average BLEU-2 score between each sequence in the top 10 predictions and the gold generations to evaluate the accuracy of generations. We report the result of existing methods on ATOMIC and Event2Mind datasets in the Table 1 and Table 2, respectively. Methods xIntent xReact oReact Overall Single Task S2S 2.75 2.11 5.18 3.35 VRNMT 4.81 3.94 6.61 4.03 CWVAE 12.98 5.65 6.97 8.53 Multi Task S2S* 19.18 4.81 4.29 9.43 COMET* 21.64 5.10 4.36 10.37 EA-VQ-VAE 23.39 5.74 4.81 11.31 Table 2: BLEU score on three inference dimensions of the Event2Mind test dataset with different approaches. For inference dimensions, “x” and “o” refers to PersonX and others, respectively. The tag (*) means reimplementation. These approaches are divided into two groups. The first group trains distinct models for each inference dimension separately, while the second group trains a model in a multi-task learning way for all inference dimensions. S2S is a RNN-based sequence-to-sequence model (Sutskever et al., 2014). VRNMT (Su et al., 2018) introduces a sequence of recurrent latent variables to model the semantic distribution of inferences. CWVAE propose a context-aware variational autoencoder (Du et al., 2019) to acquire context information, which is first pre-trained on the auxiliary dataset and then fine-tuned for each inference dimension. COMET (Bosselut et al., 2019) concatenate the event with an inference dimension as the input and fine-tune the pre-trained GPT-2. Since COMET does not report the performance for each inference dimension, we re-implement the model for better comparison. Our approach is abbreviated as EA-VQ-VAE, short for Evidence-Aware Vector Quantised Variational AutoEncoder. As we can see in the Table 1 and Table 2, the multi-task learning performs better than single-task learning overall. Therefore, we train our model in a multi-task way and compare our approach with multi-task learning based methods. From the Table 1, we can see that our approach performs better on the majority of inference dimensions, achieving the state-of-the-art result on ATOMIC dataset. For the Event2Mind dataset, results in the Table 2 show that our approach brings a gain of 1% BLEU score overall compared with the state-of-the-art method. Methods Event2Mind ATOMIC dist-1 dist-2 dist-1 dist-2 S2S* 638 1,103 2,193 5,761 COMET* 1,794 4,461 3,629 12,826 EA-VQ-VAE 1,942 4,679 3,918 14,278 Table 3: The number of distinct n-gram (dist-1 and dist2) overall on Event2Mind and ATOMIC test dataset with different multi-task learning based methods. The tag (*) means re-implementation. Besides, in order to evaluate the diversity of generations, we use the number of distinct unigrams (dist-1) and bigrams (dist-2) as evaluation metrics (Li et al., 2015). Since we train our model in a multi-task way, we compare our approach with multi-task learning based methods for fair comparison. Results in the Table 3 show that our approach could increase the diversity of generations overall on both datasets. 6124 Since automatic evaluation of generated language is limited (Liu et al., 2016), we also perform a human evaluation on model performance. Following the setup of (Sap et al., 2019), we evaluate 100 randomly selected examples from the test set and use beam search to generate 10 candidates from different models. Five human experts are asked to identify whether a model generation is correct given an event with an inference dimension. 
Table 4 shows the result of the human evaluation on both datasets, where our approach achieves a gain of 1.5%∼2% accuracy compared with COMET. Methods Event2Mind ATOMIC S2S* 0.3901 0.5174 COMET* 0.4874 0.6379 EA-VQ-VAE 0.5072 0.6528 Table 4: Human score (accuracy) of generations on Event2Mind and ATOMIC test dataset. The tag (*) means re-implementation. 4.2 Model Analysis We conduct ablation analysis to better understand how various components in our approach impact overall performance. We remove evidence and VQVAE, respectively, to analyze their contribution. Methods xIntent xReact oReact Overall EA-VQ-VAE 23.37 5.83 4.87 11.32 - w/o evidence 21.69 5.36 4.48 10.51 - w/o VQ-VAE 21.87 5.41 4.60 10.63 - w/o SL 21.95 5.54 4.57 10.69 Table 5: BLEU score on the Event2Mind dev dataset with different approaches. SL is short for separately learning. Table 5 shows that the overall performance drops from 11.3% to 10.5% on Event2Mind dev dataset when removing the evidence totally (w/o evidence), which reveals the importance of evidence for inferential texts generation. After ablating the VQ-VAE and selecting top-1 evidence as background (w/o VQ-VAE), we can see that the performance drops from 11.3% to 10.6%, which means VQ-VAE can automatically select relevant and useful evidence. In order to demonstrate the effectiveness of our learning method, we also train our model by joint learning (w/o SL). The overall BLEU score drops from 11.3% to 10.7%, which shows that our learning method can effectively train our model. We also study how the amount of evidence retrieved from the corpus impacts the performance. From Figure 4, we can see that overall BLEU score Figure 4: Overall performance with different number of retrieved evidence on Event2Mind dev dataset. increases as the number of retrieved evidence expands. This is consistent with our intuition that the performance of our approach is improved by expanding retrieved examples, since our approach can select relevant and useful evidence from more retrieved evidence. When the number of retrieved evidence is larger than 20, the overall performance does not improve. The main reason is that the quality and relevance of retrieved evidence decreases as the number of retrieved evidence expands. 4.3 Case Study We give a case study to illustrate the entire procedure of our approach. Figure 5 provides an example of the generations given an event “PresonX is away from home” on the “xIntent” dimension (i.e. “PersonX wants”). We first sample two latent variables from the codebook (i.e. z29 and z125) according to the prior distribution of VQ-VAE. We visualize the semantics of latent variables by displaying word cloud of examples that are under the same latent assignment. As we can see, z29 captures the positive semantics like “play” and “friend”, while z125 captures the negative semantics like “devastated” and “offended”. Then, two latent variables are respectively used to select relevant evidence as background knowledge. As we can see, the first latent variable selects an evidence about “playing”, which provides a clue for the model to generate texts such as “to have fun” and “to spend time with friends”. Another latent variable selects another evidence in a quarrel scene, which can help the model reason about “PersonX wants to be alone”. 
The case study shows that our approach not only equips the generation with an explicit control over the semantics of evidence but select relevant evi6125 Event Latent Variable and Visualization Selected Evidence Generation PersonX is away from home 瀡Rog playing away from home, is he?瀢 瀡you ... could say that. 瀢 瀡where are you going?瀢his voice is right behind me, buzzing intimately in my ear. I jump, and then hunch forward, away from him, away from his intense presence. to relax to have fun to take a break to spend time with friends to travel to be alone to be independent to be somewhere else   Figure 5: An examples of Event2Mind dataset on the xIntent dimension (i.e. “PersonX wants”). dence to guide the generation. Please find another case on other inference dimension on Appendix C. 4.4 Error Analysis We analyze 100 incorrectly predicted instances randomly selected from the ATOMIC dataset, and summary two main classes of errors. The first problem is that some examples cannot retrieve relevant evidence since the scale of text corpus is limited. We can leverage more sources like Wikipedia to retrieve evidence. Another cause of this problem is that term-based retrieval (e.g. BM25) calculates the matching score using words overlap and cannot capture semantics of sentences. For examples, the evidence“the lights began to shift away from the fire, like a line of fireflies” will be retrieved for the event “PersonX lights a fire” since of the high overlap, but the event does not occur in the evidence. This problem might be mitigated by using better semantic-based retrieval model. The second problem is that the model cannot effectively leverage selected evidence. Although the selected evidence is closely related to the event and the inference can be obtained from the evidence, the model still generate incorrect texts since lacking of supervised information. A potential direction to mitigate the problem is to annotate background knowledge of events in the training dataset. 5 Related Work 5.1 Event-Related Text Understanding Recently, event-related text understanding has attracted much attention (Chambers and Jurafsky, 2008; Segers et al., 2016; Wang et al., 2017; Li et al., 2018; Rashkin et al., 2018; Sap et al., 2019; Guo et al., 2020), which is crucial to artificial intelligence systems for automated commonsense reasoning. There are a variety of tasks that focus on event-related text understanding in different forms. Script (Schank and Abelson, 1977) uses a line to represent temporal and causal relations between events, and the task of script event prediction (Chambers and Jurafsky, 2008) requires models to predict the subsequent event given an event context. Previous works on the task are mainly based on event pairs (Chambers and Jurafsky, 2008; Granroth-Wilding and Clark, 2016), event chains (Wang et al., 2017), and event evolutionary graph (Li et al., 2018) to predict script event. In addition, our task relates to story ending prediction (Sharma et al., 2018; Mostafazadeh et al., 2016; Zellers et al., 2018). Mostafazadeh et al. (2016) introduce a dataset for story ending prediction, which requires models to choose the most sensible ending given a paragraph as context. In this work, we study inferential text generation proposed by Rashkin et al. (2018) and Sap et al. (2019), both of which focus on generating texts about causes and effects of events and mental states of event participants. 
5.2 Variational Autoencoder Based Text Generation Natural Language Generation, also known as text generation (McKeown, 1992; Sutskever et al., 2011), has recently become popular in NLP community (Feng et al., 2018; Duan et al., 2020). Recently, Variational Autoencoder (VAE) (Kingma and Welling, 2013) has achieved promising performance on various text generation tasks, including machine translation (Zhang et al., 2016; Su et al., 2018), text summarization (Miao and Blunsom, 2016; Li et al., 2017), and dialogue generation (Serban et al., 2017; Zhao et al., 2017). For machine translation, Zhang et al. (2016) and Su et al. (2018) introduce a continuous latent variable to explicitly model the semantics of a source sentence, which is used to guide the translation. In dialogue genration, Serban et al. (2017) apply a latent variable hierarchical encoder-decoder model to facilitate longer response, while Zhao et al. (2017) uses latent vari6126 ables to capture potential conversational intents and generates diverse responses. A recent work CWVAE (Du et al., 2019) on event-centered If-Then reasoning is the most related to our work, which introduces an additional context-aware latent variable to implicitly guide the generation by a two-stage training procedure. Different with previous works, we introduce a discrete latent variable to capture underlying semantics within inferences based on VQVAE that does not suffer from “posterior collapse” issues (van den Oord et al., 2017). These discrete latent variables are used to selectively leverage evidence as background knowledge to explicitly guide the generation. Besides, our approach provides a way to uncover the rationale of a generation to some extent through tracing back the evidence that supports the generation and the selected discrete latent variable. 6 Conclusion In this paper, we present an evidence-aware generative model based on VQ-VAE, which utilizes discrete semantic latent variables to select evidence as background knowledge to guide the generation. Experimental results show that our approach achieves state-of-the-art performance on Event2Mind and ATOMIC datasets. Further analysis shows that our approach selectively uses evidence to generate different inferential texts from multiple perspectives. Acknowledgments Daya Guo and Jian Yin are supported by the National Natural Science Foundation of China (U1711262, U1611264, U1711261, U1811261, U1811264, U1911203), National Key R&D Program of China (2018YFB1004404), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key R&D Program of Guangdong Province (2018B010107005). Jian Yin is the corresponding author. References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli elikyilmaz, and Yejin Choi. 2019. Comet: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL). Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proceedings of ACL-08: HLT, pages 789–797. Li Du, Xiao Ding, Ting Liu, and Zhongyang Li. 2019. Modeling event background for if-then commonsense reasoning using context-aware variational autoencoder. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2682–2691. Yu Duan, Canwen Xu, Jiaxin Pei, Jialong Han, and Chenliang Li. 2020. Pre-train and plug-in: Flexible conditional text generation with variational autoencoders. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Xiaocheng Feng, Ming Liu, Jiahao Liu, Bing Qin, Yibo Sun, and Ting Liu. 2018. Topic-to-essay generation with neural networks. In IJCAI, pages 4078–4084. Mark Granroth-Wilding and Stephen Clark. 2016. What happens next? event prediction using a compositional neural network model. In Thirtieth AAAI Conference on Artificial Intelligence. Daya Guo, Akari Asai, Duyu Tang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Jian Yin, and Ming Zhou. 2020. Inferential text generation with multiple knowledge sources and meta-learning. arXiv preprint arXiv:2004.03070. Diederik P Kingma and Max Welling. 2013. Autoencoding variational bayes. arXiv preprint arXiv:1312.6114. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055. Piji Li, Wai Lam, Lidong Bing, and Zihao Wang. 2017. Deep recurrent generative decoder for abstractive text summarization. arXiv preprint arXiv:1708.00625. Zhongyang Li, Xiao Ding, and Ting Liu. 2018. Constructing narrative event evolutionary graph for script event prediction. arXiv preprint arXiv:1805.05081. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122–2132, Austin, Texas. Association for Computational Linguistics. 6127 Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Kathleen McKeown. 1992. Text generation. Cambridge University Press. Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. arXiv preprint arXiv:1609.07317. Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A corpus and evaluation framework for deeper understanding of commonsense stories. arXiv preprint arXiv:1604.01696. Aaron van den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. In Advances in Neural Information Processing Systems, pages 6306–6315. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A Smith, and Yejin Choi. 2018. Event2mind: Commonsense inference on events, intents, and reactions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for ifthen reasoning. 
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027–3035. Roger C Schank and Robert P Abelson. 1977. Scripts, plans, goals, and understanding: An inquiry into human knowledge structures (artificial intelligence series). Retrieved from. Roxane Segers, Marco Rospocher, Piek Vossen, Egoitz Laparra, German Rigau, and Anne-Lyse Minard. 2016. The event and implied situation ontology (eso): Application and evaluation. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 1463– 1470. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Thirty-First AAAI Conference on Artificial Intelligence. Rishi Sharma, James Allen, Omid Bakhshandeh, and Nasrin Mostafazadeh. 2018. Tackling the story ending biases in the story cloze test. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 752–757. Jinsong Su, Shan Wu, Deyi Xiong, Yaojie Lu, Xianpei Han, and Biao Zhang. 2018. Variational recurrent neural machine translation. In Thirty-Second AAAI Conference on Artificial Intelligence. I Sutskever, O Vinyals, and QV Le. 2014. Sequence to sequence learning with neural networks. Advances in NIPS. Ilya Sutskever, James Martens, and Geoffrey E Hinton. 2011. Generating text with recurrent neural networks. In Proceedings of the 28th international conference on machine learning (ICML-11), pages 1017–1024. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Zhongqing Wang, Yue Zhang, and Ching-Yun Chang. 2017. Integrating order information and event relation for script event prediction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 57–67. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326. Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, and Min Zhang. 2016. Variational neural machine translation. arXiv preprint arXiv:1605.07869. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. arXiv preprint arXiv:1703.10960. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19– 27. A Dataset Details We show examples of Event2Mind (Rashkin et al., 2018) and ATOMIC (Sap et al., 2019) dataset in Table 6 and Table 7, respectively. The task aims to generate multiple inferential texts given an event with an inference dimension. Table 8 6128 Event Inference dim Description Target PersonX runs away from home xIntent because PersonX wanted to to leave his home, to be independent, be away from a parent xReact as a result, PersonX feels lonely, nervous, regretful oReact as a result, others feel sad, angry, worried Table 6: Examples of Event2Mind dataset, including three inference dimensions. 
For inference dimensions, “x” and “o” refers to PersonX and others, respectively (e.g. description of “xIntent”: Because PersonX wants). Event Inference dim Description Target PersonX visits friends xIntent because PersonX wanted to to enjoy their time, to catch up with them xNeed before that, PersonX needed to to go to their location, to call them xAttr PersonX is seen as friendly, sociable xEffect has an effect on PersonX have a nice party, have good dinner xWant as a result, PersonX wants have fun, enjoy and spend time xReact as a result, PersonX feels happy, comfortable oReact as a result, others feel happy, pleased oWant as a result, others want to wind down, to clean their home oEffect has an effect on others make the relation stronger, bring a guest into their home Table 7: Examples of ATOMIC dataset, including nine inference dimensions. For inference dimensions, “x” and “o” refers to PersonX and others, respectively (e.g. description of “xIntent”: Because PersonX wants).. lists statistics of Event2Mind and ATOMIC dataset. Both datasets contain about 25,000 unique events (# unique events) extracted multiple data sources, where the events has 5 words on average (# average words of events). Event2Mind focuses on three inference dimensions shown in Table 6 and contains about 2.6 inferences on average, while ATOMIC focuses on nine inference dimensions shown in Table 7 and contains about 3.6 inferences on average. Beside, we list the number of distinct unigram (# dist-1 of inferences) and bigram (# dist-2 of inferences) to evaluate the diversity of inferences. B Model Training The text corpus is built upon BooksCorpus (Zhu et al., 2015). We extract about 24.2M paragraphs from the corpus, where a paragraph has about 50 words. We retrieve 45 evidence from the corpus for all experiments. We initialize GPT-2 with 12 layers, 768 dimensional hidden states and 12 attention heads using the original pre-trained weights (Radford et al., 2019). For VQ-VAE, the codebook is composed of 400 discrete latent variables and the dimension of latent variable is 768. We set the max length of evidence, events and inferences as 64, 64, and 32, respectively. Model parameters except GPT-2 are initialized with uniform distribution. We use the Adam optimizer to update model parameters. The learning rate and the batch size is set as 5e-5 and 64, respectively. In the multi-task learning way, we concatenate events and special tokens of inference dimensions as the input to guide the generation in different dimension. We tune hyperparameters and perform early stopping on the development set. C Additional Case Study Figure 6 provides an example of the generations given an event “PerxonX dreams last night” on the “xReact” dimension (i.e. “PersonX feels”). We first sample two latent variables from the codebook (i.e. z330 and z371) according to the prior distribution of VQ-VAE (van den Oord et al., 2017). We visualize the semantics of latent variables by 6129 Dataset # inference dimension # unique events # average words of events # inferences per example # dist-1 of inferences # dist-2 of inferences Event2Mind 3 24716 5.1 2.6 10,929 52,830 ATOMIC 9 24313 5.2 3.6 27,169 20,5659 Table 8: Statistic of Event2Mind and ATOMIC Dataset. PersonX dreams last night I had the strangest dreams last night as a result and not a single nightmare. What kind of dreams? Harmony asked gently. Its all fading now, aria frowned. There seemed to be a ton of singing in it though. I wanted ... that night she cried herself to sleep ... 
for the first time, if not the last. Even in her dreams she found no peace. Figure 6: An example from the Event2Mind dataset on the xReact dimension (i.e. “PersonX feels”); the panels show the event, the sampled latent variables with their word-cloud visualizations, the selected evidence, and the generations. displaying word clouds of the examples that are under the same latent assignment. As shown in Figure 6, z330 captures positive semantics such as “excited” and “friend”, while z371 captures negative semantics such as “scared” and “noise”. The two latent variables are then respectively used to select relevant evidence as background knowledge. The first latent variable selects a piece of evidence about a sweet dream, “There seemed to be a ton of singing in it though”, which provides a clue for the model to generate positive emotions such as “excited” and “happy”. The other latent variable selects a piece of evidence from a nightmare, “Even in her dreams she found no peace”, which helps the model reason about the emotion of “PersonX”, such as “scared” and “nervous”.
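The vector-quantization step that underlies these discrete latent variables can be sketched briefly. The following is not the authors' implementation but a minimal PyTorch illustration of the standard VQ-VAE quantization (van den Oord et al., 2017) that the model builds on: each encoder state is assigned to its nearest codebook entry and gradients are passed through with the straight-through estimator. The codebook size (400) and latent dimension (768) follow Appendix B; the commitment weight of 0.25 is a commonly used default rather than a value reported in the paper.

import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    # Minimal VQ layer: maps each continuous vector to its nearest codebook entry.
    def __init__(self, num_codes=400, dim=768, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        nn.init.uniform_(self.codebook.weight, -1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # weight of the commitment term

    def forward(self, z_e):
        # z_e: (batch, dim) continuous encoder output
        d = (z_e.pow(2).sum(1, keepdim=True)
             - 2 * z_e @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))   # squared distances, (batch, num_codes)
        idx = d.argmin(dim=1)                         # discrete latent assignment, e.g. z_330
        z_q = self.codebook(idx)                      # quantized vectors
        loss = ((z_q - z_e.detach()).pow(2).mean()    # codebook loss
                + self.beta * (z_e - z_q.detach()).pow(2).mean())  # commitment loss
        z_q = z_e + (z_q - z_e).detach()              # straight-through estimator
        return z_q, idx, loss

vq = VectorQuantizer()
z_e = torch.randn(4, 768)        # e.g. encoder states for four events
z_q, idx, vq_loss = vq(z_e)
print(idx.tolist(), vq_loss.item())

The assigned indices play the role of the discrete latent variables that select evidence in the model.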
2020
544
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6130–6140 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6130 How to Ask Good Questions? Try to Leverage Paraphrases Xin Jia, Wenjie Zhou, Xu Sun, Yunfang Wu∗ MOE Key Lab of Computational Linguistics, School of EECS, Peking University {jemmryx, wjzhou013, xusun, wuyf}@pku.edu.cn Abstract Given a sentence and its relevant answer, how to ask good questions is a challenging task, which has many real applications. Inspired by human’s paraphrasing capability to ask questions of the same meaning but with diverse expressions, we propose to incorporate paraphrase knowledge into question generation(QG) to generate human-like questions. Specifically, we present a two-hand hybrid model leveraging a self-built paraphrase resource, which is automatically conducted by a simple back-translation method. On the one hand, we conduct multi-task learning with sentence-level paraphrase generation (PG) as an auxiliary task to supplement paraphrase knowledge to the task-share encoder. On the other hand, we adopt a new loss function for diversity training to introduce more question patterns to QG. Extensive experimental results show that our proposed model obtains obvious performance gain over several strong baselines, and further human evaluation validates that our model can ask questions of high quality by leveraging paraphrase knowledge. 1 Introduction Question generation (QG) is an essential task for NLP, which focuses on generating grammatical questions for given paragraphs or sentences. It plays a vital role in various realistic scenarios. For educational purposes, QG can create reading comprehension materials for language learners (Heilman and Smith, 2010). For business use, QG can bring benefits to conversation systems and chat-bots for effective communication with humans (Mostafazadeh et al., 2016). Besides, automatically-generated questions can be conversely used for constructing question answering datasets to enhance reading comprehension sys∗Corresponding author. Sentence: the next three drives of the game would end in punts. Answer: punts Reference question: what did the next three drives result in? Question generated by the baseline model: the next three drives of the game would end in what? Sentence: in ring theory, the notion of number is generally replaced with that of ideal. Answer: ring theory Reference question: in what theory is the idea of a number exchanged with that of an ideal? Question generated by the baseline model: in what theory is the notion of number replaced with that of ideal? Table 1: Real examples of generated questions from SQuAD. We highlight the paraphrase transitions between sentences and questions. Human creates good questions by leveraging paraphrase knowledge, while the automatically generated questions just copy the original sentence, resulting in lower evaluation scores. tems (Tang et al., 2017; Duan et al., 2017; Xu et al., 2019; Zhang and Bansal, 2019). Recent neural network-based methods have achieved promising results on QG, most of which are based on the seq2seq attention framework (Du et al., 2017; Zhou et al., 2017; Gao et al., 2018; Kim et al., 2018; Zhou et al., 2019b), enriched with lexical features (Zhou et al., 2017; Sun et al., 2018; Song et al., 2018) or enhanced by copy mechanism (Du and Cardie, 2018; Sun et al., 2018; Zhou et al., 2019a). 
Although much progress has been made for QG, existing approaches do not explicitly model the “notorious” lexical and syntactic gaps in the generation process. That is, some parts of two texts (e.g. the input sentence and reference question, or the reference question and generated question) may convey the same meaning but use different words, phrases or syntactic patterns. Figure 1: A sketch of our design to leverage paraphrase knowledge in QG. In real communication, humans often paraphrase a source sentence to ask questions that are grammatical and coherent. Take SQuAD (Rajpurkar et al., 2016) as an example, a popular reading comprehension dataset that has been widely used for QG: a large percentage of its questions are created by paraphrasing (33.3% of the questions contain synonymy variations and 64% contain syntactic variations (Rajpurkar et al., 2016)). Two examples are shown in Table 1. Due to the lack of paraphrase knowledge, the generated questions simply copy certain words from the input sequence, and their quality is thus not competitive with human-created questions. To address this issue, we introduce paraphrase knowledge into the QG process to generate human-like questions. The sketch of our design is illustrated in Figure 1. To make our model easy to implement and to train it in an end-to-end fashion, we do not use any extra paraphrase generation (PG) dataset but just use a simple back-translation method to automatically create paraphrases for both the input sentences and reference questions. Based on the high-quality expanded data, we propose a two-hand hybrid model. On the left hand, using the expanded sentence paraphrase as the target of PG, we perform multi-task learning with PG and QG to optimize the task-share encoder with the paraphrase knowledge. On the right hand, with the gold reference question and question paraphrase as QG’s multi-targets, we adopt a new min-loss function to enable the QG module to learn more diverse question patterns. We conduct extensive experiments on SQuAD and MARCO (Nguyen et al., 2016). Results show that both separate modules, the PG auxiliary task and the min-loss function, clearly improve the performance of the QG task, and combining them achieves further improvements. Furthermore, human evaluation results show that our hybrid model can ask better and more human-like questions by incorporating paraphrase knowledge. 2 Related Work Most current mainstream neural network-based methods for QG utilize the Seq2Seq model with an attention mechanism (Du et al., 2017; Zhou et al., 2017; Zhao et al., 2018b; Zhou et al., 2019a). To obtain better representations of the input sequence and answer, the answer position and token lexical features are treated as supplements for the neural encoder (Zhou et al., 2017; Song et al., 2018; Kim et al., 2018). Similar to other text generation tasks, many works on QG also employ a copy or pointer mechanism to overcome the OOV problem (Du and Cardie, 2018; Sun et al., 2018; Zhang and Bansal, 2019). Recently, Zhou et al. (2019a) employ language modeling (LM) as an auxiliary task to enrich the encoder representations. In this paper, we adopt this work as one of the baseline models, since their universal model is easy to implement and achieves promising results for QG. In order to make use of the context information of paragraphs, Zhao et al. (2018b) propose a gated self-attention network to encode the context passage.
Based on this, Zhang and Bansal (2019) apply reinforcement learning to deal with semantic drift in QG; Nema et al. (2019) use a passage-answer fusion mechanism to obtain answer-focused context representations; Li et al. (2019a) utilize gated attention to fuse answer-relevant relation with context sentence. Besides, Chen et al. (2019) design different passage graphs to capture structure information of passage through graph neural networks. Dong et al. (2019) propose a unified language model pre-training method to obtain better context representations for QG. All these works adopt a whole paragraph as input to generate questions. Different from this, our work only takes a sentence as input and leaves paragraph-level QG for future research. Paraphrase generation is also a challenging task for NLP. Recent works usually obtain paraphrases by reordering or modifying the syntax or lexicon based on some paraphrase databases and rules (Fader et al., 2013; Chen et al., 2016), or by employing some neural generation methods (Prakash et al., 2016; Li et al., 2019b). In this paper, we employ a simple and effective paraphrasing method to expand both input sentences and reference questions. Our method also can be replaced with more sophisticated paraphrasing methods. Paraphrase knowledge has been used to improve many NLP tasks, such as machine translation, ques6132 tion answering, and text simplification. CallisonBurch et al. (2006) use paraphrase techniques to deal with unknown phrases to improve statistical machine translation. Fader et al. (2013) and Dong et al. (2017) employ paraphrase knowledge to enhance question answering models. Kriz et al. (2018) utilize paraphrase and context-based lexical substitution knowledge to improve simplification task. Similarly, Zhao et al. (2018a) combine paraphrase rules of PPDB (Ganitkevitch et al., 2013) with Transformer (Vaswani et al., 2017) to perform sentence simplification task. Guo et al. (2018a) propose a multi-task learning framework with PG and simplification. In addition, Yu et al. (2018) and Xie et al. (2019) use paraphrase as data argumentation for their primary tasks. Different from these works, we leverage paraphrase knowledge for question generation, by automatically constructing a built-in paraphrase corpus without using any external paraphrase knowledge bases. 3 Model Description In this section, we first describe two baseline models we used: feature-enriched pointer-generator and language modeling enhanced QG. Then we explain how to obtain paraphrase resources and show the quality statistics. Furthermore, we describe in detail two modules of utilizing paraphrase knowledge: the PG auxiliary task and the min loss function, as well as their combination. The overall structure of our hybrid model is shown in Figure 2. 3.1 Baseline Models 3.1.1 Feature-enriched Pointer-generator Sun et al. (2018) enhance pointer-generator (See et al., 2017) model with rich features proposed by Zhou et al. (2017). They adopt a bidirectional LSTM as the encoder, which takes the featureenriched embedding ei as input: ei = [wi; ai; ni; pi; ui] (1) where wi, ai, ni, pi, ui respectively represents embeddings of word, answer position, name entity, POS and word case. Same as the decoder used by See et al. (2017), another unidirectional LSTM with attention mechanism is used to obtain the decoder hidden state st and context vector ct. 
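To make the feature-enriched input of Eq. (1) concrete, the sketch below simply concatenates the five embeddings. It is an illustrative reimplementation rather than the authors' code; the 300- and 32-dimensional sizes follow the implementation details in Section 4.3, while the tag-set sizes are placeholder assumptions.

import torch
import torch.nn as nn

class FeatureEnrichedEmbedding(nn.Module):
    # e_i = [w_i; a_i; n_i; p_i; u_i] as in Eq. (1); all sizes are illustrative
    def __init__(self, vocab=20000, d_word=300, d_feat=32,
                 n_ans=3, n_ner=10, n_pos=45, n_case=3):
        super().__init__()
        self.word = nn.Embedding(vocab, d_word)   # initialized with GloVe in the paper
        self.ans = nn.Embedding(n_ans, d_feat)    # answer position (e.g. B/I/O)
        self.ner = nn.Embedding(n_ner, d_feat)    # named-entity tag
        self.pos = nn.Embedding(n_pos, d_feat)    # POS tag
        self.case = nn.Embedding(n_case, d_feat)  # word case

    def forward(self, w, a, n, p, u):
        # every argument is a (batch, seq_len) tensor of integer ids
        return torch.cat([self.word(w), self.ans(a), self.ner(n),
                          self.pos(p), self.case(u)], dim=-1)

emb = FeatureEnrichedEmbedding()
B, L = 2, 7
e = emb(torch.randint(0, 20000, (B, L)), torch.randint(0, 3, (B, L)),
        torch.randint(0, 10, (B, L)), torch.randint(0, 45, (B, L)),
        torch.randint(0, 3, (B, L)))
print(e.shape)  # torch.Size([2, 7, 428]) -> fed to the bidirectional LSTM encoder

The bidirectional LSTM encoder consumes these enriched embeddings, and the attentive decoder then produces the hidden state st and context vector ct mentioned above.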
Based on these, the pointergenerator model will simultaneously calculate the probabilities of generating a word from vocabulary and copying a word from the source text. The final probability distribution is the combination of these two modes with a generation probability pg: P(w) = pgPvocab + (1 −pg)Pcopy (2) The training objective is to minimize the negative log likelihood of the target sequence q: Lqg = −1 Tqg Tqg X t=1 logP(yqg t = qt) (3) 3.1.2 Language Modeling Enhanced QG Zhou et al. (2019a) enhance QG with language modeling under a hierarchical structure of multitask learning. The language modeling aims at predicting the next and previous words in the input sequence with forward and backward LSTMs, respectively, which serves as a low-level task to provide semantic information for the high-level QG task. In general, the input sequence will firstly be fed into the language modeling module to get the semantic hidden states, then these states will be concatenated with the input sequence to obtain the input of the feature-rich encoder: ei = [wi; ai; ni; pi; ui; hlm i ] (4) where hlm i is the semantic hidden state of LM module. The loss function of language modeling is defined as: Llm = − 1 Tlm −1 Tlm−1 X t=1 log(P lm(wt+1|w<t+1)) − 1 Tlm −1 Tlm X t=2 log(P lm(wt−1|w>t−1)) (5) where P lm(wt+1|w<t+1) and P lm(wt−1|w>t−1) represent the generation probabilities of the next word and the previous word, respectively. As a result, the total loss of language modeling enhanced QG is formulated as: Llqg = Lqg + βLlm (6) where β is a hyper-parameter to control the relative importance between language modeling and QG. Follow the work of Zhou et al. (2019a), we set β to 0.6. We re-implement this unified model to base our method on a strong baseline. 6133 3.2 Paraphrase Expansion The paraphrasing strategy is independent of the neural-based QG model, and we can use any advanced methods to generate paraphrases. In our work, we employ a simple back-translation method to automatically create paraphrases of both sentences and questions. Specially, we use a mature translation tool Google Translate, which is a free and accessible online service. We translate an original text into German and then back to English to get its paraphrase. As a result, we obtain s′ which is the paraphrase of the input sentence s, and q′ which is the paraphrase of the golden reference question q. In the following section, we will illustrate the way to use (s, s′) as a training pair of the auxiliary PG task, and adopt (q, q′) as multireferences to conduct the diversity training module. The way we expand paraphrases does not need extra PG datasets. Besides, it guarantees the PG and QG tasks share the same input s, so we can optimize their sharing encoder simultaneously and train the model end-to-end. Synonym Syntactic Fluency sentence-paraphrase 74% 7% 67% question-paraphrase 58% 44% 67% Table 2: Human evaluation of expanded paraphrases. To assess the quality of expanded paraphrases, we randomly select 100 paraphrases respectively from sentences and questions, and ask two annotators to judge the Synonym conversions and Syntactic transitions, as well as the paraphrase Fluency. As shown in Table 2, 74% sentence paraphrases and 58% question paraphrases have synonym conversions with source sequences, 7% and 44% of them have sentence pattern transitions. Besides, 67% of paraphrases have no grammar errors. Two real expansion examples are shown in Table 3. 
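The back-translation expansion of Section 3.2 can be sketched as follows. The paper relies on the online Google Translate service; because that requires an external API, the sketch below substitutes the publicly available MarianMT English–German models from the transformers library, so it approximates the procedure rather than reproducing the authors' exact pipeline.

from transformers import MarianMTModel, MarianTokenizer

def load(name):
    return MarianTokenizer.from_pretrained(name), MarianMTModel.from_pretrained(name)

tok_en_de, mt_en_de = load("Helsinki-NLP/opus-mt-en-de")
tok_de_en, mt_de_en = load("Helsinki-NLP/opus-mt-de-en")

def translate(texts, tok, mt):
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    out = mt.generate(**batch, max_length=128)
    return tok.batch_decode(out, skip_special_tokens=True)

def back_translate(texts):
    # English -> German -> English, applied to both input sentences and reference questions
    return translate(translate(texts, tok_en_de, mt_en_de), tok_de_en, mt_de_en)

sentence = "the next three drives of the game would end in punts."
question = "what did the next three drives result in?"
s_para, q_para = back_translate([sentence, question])
print(s_para)
print(q_para)

The manual assessment in Table 2 measures how much synonym and syntactic variation, and how much fluency, such automatically expanded paraphrases retain.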
It indicates that our expansion method introduces rich and high quality paraphrasing knowledge into the original data. 3.3 Multi-task Learning with Paraphrase Generation 3.3.1 Auxiliary PG Task The multi-task learning mechanism with PG aims at introducing paraphrase knowledge into QG. In general, we employ a parallel architecture to combine PG and QG, where QG is the main task and PG serves as an auxiliary task. To make our model Input Sentence: the current basilica of the sacred heart is located on the spot of fr. Sentence Paraphrase: the present basilica of the sacred heart is located in the place of fr. Input Question: what structure is found on the location of the original church of father sorin at notre dame? Question Paraphrase: what structure can be found at the location of the original church of father sorin at notre dame? Table 3: Real examples of our paraphrase expansion on the sentences and reference questions respectively. We mark paraphrase transitions with color. easy to implement and can be trained end-to-end, we conduct the multi-task learning in a simultaneous mode. In detail, feature-riched embeddings will first be encoded by the task-share encoder and then be fed into PG and QG decoders respectively. The PG and QG decoders both have two layers and they are identical in the structure but different in parameters. In the auxiliary PG task, the input is the original sentence s, and the training objective is to minimize the cross-entropy loss: Lpg = −1 Tpg Tpg X t=1 logP(ypg t = s′t) (7) where ypg t is the generated word of PG at time step t and s′t is the t th word in the expanded sentence paraphrase s′ . 3.3.2 Soft Sharing Strategy To enhance the impact of auxiliary PG task so that the paraphrase knowledge can be absorbed by the question generation process more deeply, we employ a soft sharing strategy between the first layer of PG and QG decoders. The soft sharing strategy loosely couples parameters and encourages them close to each other in representation space. Following the work of Guo et al. (2018b), we minimize the l2 distance between the shared layer of QG and PG decoders as a regularization. The soft sharing loss is defined as: Lsf = X d∈D ||θd −φd||2 (8) where D is the set of shared decoder parameters, θ and φ respectively represent the parameters of the main QG task and the auxiliary PG task. 6134 Figure 2: Illustration of our proposed hybrid model. 3.4 Diversity Training with Min-loss Function For the QG task, a general training goal is to fit the decoded results with the reference questions. To provide more generation patterns, we adjust the training target from one golden reference question to several reference questions by using expanded paraphrase resources. We adopt a min-loss function among several references, and the loss function defined by Equation 3 can be rewritten as: Lqg = min q∈Q(−1 Tqg Tqg X t=1 logP(yqg t = qt)) (9) where Q is the set of gold reference question and expanded question paraphrase {q, q′}. Each generated question will separately calculate the negative log-likelihood of its multiple references, and the final loss is the minimum of them. Under this training process, our model can learn multiple question expressions which are not in the original training dataset, so that the generation can be more diverse. Besides, inspired by the work of Kovaleva et al. (2018), we have tried several loss strategies, such as minimum loss, maximum loss, and weighted loss to guide the diversity training. 
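A minimal sketch of the min-loss objective in Eq. (9), assuming the per-reference sequence negative log-likelihoods have already been computed (this is an illustration, not the released code):

import torch

def min_reference_loss(per_reference_nll):
    # Eq. (9): per_reference_nll has shape (batch, num_references) and holds the
    # NLL of each generated question against the gold reference and its paraphrase;
    # for every example only the smallest one contributes to the loss.
    return per_reference_nll.min(dim=1).values.mean()

nll = torch.tensor([[2.3, 1.7],    # example 1: the question paraphrase is closer
                    [0.9, 1.4]])   # example 2: the gold reference is closer
print(min_reference_loss(nll))     # tensor(1.3000)

Replacing the min with a max or with a weighted sum yields the other loss strategies mentioned above.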
Among them, the minimum is the best performing strategy. By employing minimum strategy, the QG decoder fits the generated question with the most similar sequence among gold reference question and question paraphrase. In this way, more question patterns are introduced into QG process. 3.5 Hybrid Model Combining the above modules, we get our hybrid model. During training, the feature-enriched inputs are first encoded by the task-share encoder. Then the semantic hidden states are fed into PG decoder and QG decoder, respectively. For PG decoder, it has one fitting target (expanded sentence paraphrase). For QG decoder, it calculates the cross-entropy loss with both the gold reference question and the question paraphrase and regards the minimum loss of them as the QG loss. The auxiliary PG task and diversity training strategy simultaneously optimize the question generation process. The combined training loss function can be defined as: Ltotal = Llqg + αLpg + λLsf (10) where α and λ are both hyper-parameters. We will describe the chosen of these hyper-parameters later. 4 Experimental Settings 4.1 Datasets Our experiments are based on two reading comprehension datasets: SQuAD (2016) and MARCO (2016). On SQuAD, since there are two different splits that are most often used, we conduct experiments on both two splits on sentence-level. For 6135 Zhou Split Du Split Previous Works (conference-year) B1 B2 B3 B4 MET B1 B2 B3 B4 MET s2s (ACL-2017) 43.09 25.96 17.50 12.28 16.62 NQG++ (NLPCC-2017) 13.29 M2S+cp (NAACL-2018) 13.91 13.98 18.77 A-P-Hybrid (EMNLP-2018) 43.02 28.14 20.51 15.64 s2sa-at-mp-gsa (EMNLP-2018) 44.51 29.07 21.06 15.82 19.67 43.47 28.23 20.40 15.32 19.29 ASs2s (AAAI-2019) 16.17 16.20 19.92 LM enhanced QG (EMNLP-2019) 42.80 28.43 21.08 16.23 Q-type (EMNLP-2019) 43.11 29.13 21.29 16.31 Sent-Relation (EMNLP-2019) 44.40 29.48 21.54 16.37 20.68 45.66 30.21 21.82 16.27 20.36 Our Models baseline-1 +Data augmentation 38.16 24.35 17.60 13.28 17.73 38.91 24.80 17.83 13.36 17.97 baseline-1 41.06 26.63 19.65 14.71 19.12 41.04 27.05 19.92 15.21 19.19 baseline-1 +Min 42.03 27.61 20.27 15.48 19.61 42.97 28.52 21.02 16.06 19.93 baseline-1 + PG 42.76 28.26 20.89 16.09 20.11 43.68 28.99 21.39 16.37 20.23 baseline-1 +Min+PG (hybrid model-1) 43.61 28.67 21.09 16.23 20.29 42.66 28.68 21.39 16.55 20.44 baseline-2 42.39 28.11 20.86 16.13 19.95 42.76 28.80 21.47 16.57 20.38 baseline-2 +Min 43.38 28.92 21.49 16.61 20.40 42.94 29.06 21.73 16.88 20.60 baseline-2 +PG 43.56 28.98 21.57 16.74 20.58 43.73 29.53 22.06 17.08 20.78 baseline-2 +Min+PG (hybrid model-2) 43.63 29.21 21.79 16.93 20.58 44.32 29.88 22.28 17.21 20.96 Table 4: Experimental results of our models on SQuAD comparing with previous works and different baselines. The results of previous works are copied from their original papers. Baseline-1 and Baseline-2 refer to Featureenriched Pointer-generator and LM enhanced QG respectively. Bn: BLEU-n, MET: METOER. Du Split (Du et al., 2017), we use the same settings with Li et al. (2019a) and there are 74689, 10427 and 11609 sentence-question-answer triples for training, validation and test respectively. For Zhou Split (Zhou et al., 2017), we use the data shared by Zhou et al. (2017) and there are 86,635, 8,965 and 8,964 triples correspondingly. On MARCO, there are 74,097, 4,539 and 4,539 sentence-answerquestion triples for train, development and test sets, respectively (Sun et al., 2018). We expand the datasets using the paraphrase expansion approach described in Section 3.2. 
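Before turning to the form of the expanded samples, the soft sharing term of Eq. (8) and the combined objective of Eq. (10) can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the LSTM modules merely stand in for the actual QG and PG decoders, and the coefficient values are those reported later in Section 5.4.

import torch
import torch.nn as nn

# stand-ins for the two-layer QG and PG decoders (identical structure, separate parameters)
qg_decoder = nn.LSTM(input_size=512, hidden_size=512, num_layers=2)
pg_decoder = nn.LSTM(input_size=512, hidden_size=512, num_layers=2)

def soft_sharing_loss(dec_a, dec_b, layer_suffix="_l0"):
    # Eq. (8): squared l2 distance between the softly shared first-layer parameters
    loss = torch.zeros(())
    for (name, p_a), (_, p_b) in zip(dec_a.named_parameters(), dec_b.named_parameters()):
        if name.endswith(layer_suffix):
            loss = loss + (p_a - p_b).pow(2).sum()
    return loss

def total_loss(l_lqg, l_pg, l_sf, alpha=0.3, lam=1e-6):
    # Eq. (10): LM-enhanced QG loss + alpha * PG loss + lambda * soft-sharing loss
    return l_lqg + alpha * l_pg + lam * l_sf

l_sf = soft_sharing_loss(qg_decoder, pg_decoder)
print(total_loss(torch.tensor(4.1), torch.tensor(3.2), l_sf).item())

Here Llqg is the language-modeling enhanced QG loss of Eq. (6), whose QG part is the min-loss of Eq. (9).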
After that, one sample of the expanded dataset is in the form of ((sentence, sentence paraphrase), (question, question paraphrase), answer). 4.2 Baselines and Metrics For fair comparison, we report the following recent works on sentence-level Du and Zhou Splits: s2s (Du et al., 2017): an attention-based seq2seq model. NQG++ (Zhou et al., 2017): a feature-enriched Seq2Seq model. M2S+cp (Song et al., 2018): uses different matching strategies to explicitly model the information between answer and context. A-P-Hybrid (Sun et al., 2018): generates an accurate interrogative word and focuses on important context words. s2s-a-ct-mp-gsa (Zhao et al., 2018b): employs a gated attention encoder and a maxout pointer decoder to deal with long text inputs. ASs2s (Kim et al., 2018): proposes an answerseparated Seq2Seq model by replacing the answer in the input sequence with some specific words. LM enhanced QG (Zhou et al., 2019a): treats language modeling as a low-level task to provide semantic representations for the high-level QG. Q-type (Zhou et al., 2019b): multi-task learning framework with question word prediction and QG. Sent-Relation (Li et al., 2019a): extracts answer-relevant relations in sentence and encodes both sentence and relations to capture answerfocused representations. We evaluate the performance of our models using BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014), which are widely used in previous works for QG. 4.3 Implementation Details We set the vocabulary as the most frequent 20,000 words. We use 300-dimensional GloVe word vectors as initialization of the word embeddings. Answer position and token lexical features are randomly initialized to 32-dimensional vectors through truncated normal distribution. The maximum lengths of input sequence and output sequence are 100 and 40, respectively. The hidden 6136 size of the encoder, decoder, and language modeling LSTMs are all 512. We use Adagrad optimization with learning rate 0.15 for training. The batch size is 32 and the beam search decoding size is 12. To alleviate the volatility of the training procedure, we get the average model of the 5 checkpoints closest to the best-trained model on development set. 5 Results and Analysis 5.1 Main Results The experimental results on two splits of SQuAD are shown in Table 4. In terms of BLEU-4 that is often regarded as the main evaluation metric for text generation, our hybrid model-2 yields the best results on both splits, with 16.93 on Zhou Split and 17.21 on Du Split. We achieve state-of-the-art results on Du Split for sentence-level QG. Especially for baseline-1, the performance gains of our model are more obvious. Our hybrid model1 outperforms baseline-1 by 1.52 points on Zhou Split and 1.34 points on Du Split, which are large margins for this challenging task. Even based on this weak baseline, our method also achieves the state-of-the-art, 16.55 BLEU-4 score on Du Split for sentence-level QG. The previous work of CGC-QG (Liu et al., 2019) obtains a 17.55 BLEU-4 score on Zhou Split. But their model relies on many heuristic rules and ad-hoc strategies. In their full model with clue prediction, they do graph convolutional network (GCN) operations on dependency trees, while our model does not use any hand-crafted rules and is lightweight without graphs and trees. We also conduct experiments on MARCO, and the results are shown in Table 5. Our hybrid models obtain obvious improvements over two baselines, achieving a state-of-the-art BLEU-4 score of 21.61. 
Specifically, SQuAD and MARCO are built in different ways. The questions in SQuAD are generated by crowd-workers, while questions in MARCO are sampled from real user queries. The experimental results on two datasets validate the generalization and robustness of our models. Effect of Multi-task Learning with PG Task As shown in Table 4, the auxiliary PG task brings consistent improvements over both baseline models. On Zhou Split, it increases baseline-1 by 1.38 points and baseline-2 by 0.61 respectively. On Du Split, it increases baseline-1 by 1.16 points and baseline-2 by 0.51 points respectively. The Previous Works BLEU-4 s2s(Du et al., 2017) 10.46 s2sa-at-mp-gsa(Zhao et al., 2018b) 16.02 A-P-Hybrid(Sun et al., 2018) 19.45 LM enhanced QG(Zhou et al., 2019a) 20.88 Q-type(Zhou et al., 2019b) 21.59 Our Models baseline-1 20.13 hybrid model-1 21.15 baseline-2 20.79 hybrid model-2 21.61 Table 5: Main results of our models on MARCO. reason is that the PG task provides abundant paraphrase knowledge into the model and allows the task-share encoder to learn more paraphrasing representations. Effect of Diversity Training with Min-loss Function From the results in Table 4, we can see the min-loss strategy improves performances over both baseline models. On Zhou Split, we get a 0.77 improvement over baseline-1 and 0.48 improvement over baseline-2, respectively. On Du Split, we get similar improvements. Effect of Data Augmentation A straightforward way to leverage paraphrase knowledge is data augmentation. To test whether it works by simply adding paraphrase data as external training data, we also conduct an experiment based on the question paraphrase resource. We add the (s, q′) pairs into the training dataset, where s represents the input sentence and q′ denotes the paraphrase of the golden reference. Under this setting, we double the training samples. Unfortunately, as shown in Table 4, the baseline-1 model yields much lower BLEU4 scores on both Zhou Split (13.28) and Du Split (13.36) with such data augmentation. The main reason is that for the same input sentence, there are two different training targets (q and q′), making the training process cannot easily converge. 5.2 Diversity Test To investigate whether the paraphrase knowledge introduces more diverse expressions, we conduct evaluations on the distinct metric (Li et al., 2016), which is calculated as the number of distinct unigrams (distinct-1) and bigrams (distinct-2) divided by the total number of the generated words. The experimental results are shown in Table 6. It shows that our hybrid models obtain obvious gains over baseline models on both distinct-1 and distinct-2 6137 metrics, validating that our models really generate more diverse questions with the help of paraphrase knowledge. Models distinct-1 distinct-2 baseline-1 9.49 39.48 hybrid model-1 9.75 41.97 baseline-2 9.81 41.14 hybrid model-2 9.98 42.43 Table 6: Results of the distinct metric on zhou split. 5.3 Ablation Study of Soft Sharing We also verify the effectiveness of the soft sharing mechanism by removing it from the full hybrid models. The results are displayed in Table 7. After removing the soft sharing mechanism, both of our models have varying degrees of performance degradation. It demonstrates that the soft sharing strategy enhances the influence of paraphrase knowledge on QG decoder. Models BLEU-4 METEOR hybrid model-1 16.23 20.29 w/o soft sharing 15.87 20.04 hybrid model-2 16.93 20.58 w/o soft sharing 16.32 20.34 Table 7: Ablation studies of soft sharing on Zhou Split. 
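The distinct metric of Table 6 follows the stated definition directly; the sketch below is a straightforward reimplementation of that definition (not the original evaluation script), applied to two toy generated questions.

def distinct_n(questions, n):
    # number of distinct n-grams divided by the total number of generated words
    ngrams, total_words = set(), 0
    for q in questions:
        tokens = q.split()
        total_words += len(tokens)
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(ngrams) / max(total_words, 1)

generated = ["what happened to his lab in 1904 ?",
             "where is newcastle 's horse racing course located ?"]
# Table 6 reports these ratios as percentages over the full set of generated questions
print(100 * distinct_n(generated, 1), 100 * distinct_n(generated, 2))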
5.4 Parameters Selection The soft sharing coefficient hyper-parameter λ is 1×10−6, intuitively chosen by balancing the crossentropy and regularization losses according to Guo et al. (2018b). The other hyper-parameter α which is to control the balance of QG and PG is tuned by grid search. We set α to different values to explore the best proportion of two tasks. The experimental results of different α are shown in Figure 3. Consequently, we set α to 0.3 for our hybrid model. Figure 3: The influence of α on BLEU-4 scores on development set of Zhou Split. 5.5 Human Evaluation To further assess the quality of generated questions, we perform human evaluation to compare our hybrid model-2 with the strong baseline of language modeling enhanced QG. We randomly select 100 samples from SQuAD (Zhou Split) and ask three annotators to score these generated questions according to three aspects: Fluency: which measures whether a question is grammatical and fluent; Relevancy: which measures whether the question is relevant to the input context; Answerability: which indicates whether the question can be answered by the given answer. The rating score is set to [0, 2]. The evaluation results are shown in Table 8. The Spearman correlation coefficients between annotators are high, which guarantees the validity of human evaluation. Our hybrid model receives higher scores on all three metrics, indicating that our generated questions have higher quality in different aspects. Models Fluency Relevancy Answerability baseline-2 1.785 1.535 1.134 hybrid model-2 1.874 1.682 1.333 Spearman 0.722 0.693 0.861 Table 8: Human evaluation results. 5.6 Case Study We list two examples of generated questions in Table 9. By introducing paraphrase knowledge into generation, the generated questions well capture the paraphrase transitions between contexts and references. Obviously, the questions generated by our hybrid model are more grammatical and coherent. 5.7 Different Paraphrasing Methods To further test the generalization of our proposed methods, we use other paraphrasing methods to construct the paraphrase dataset. PPDB: for each non-stop word and phrase, looking it up in PPDB (2013) and replacing it with its synonyms. NMT: another back-translation method using a pre-trained Transformer (2017) model. Mixed: expanding input sentences with Google Trans and expanding reference questions with PPDB. The results are shown in Table 10. Our hybrid model-2 still achieves excellent performances on both BLEU and METEOR. From the results, we 6138 Sentence: his lab was torn down in 1904, and its contents were sold two years later to satisfy a debt. Answer: torn down Reference Question: what happened to his lab? Baseline Model-2: what was [UNK] ’s lab? Hybrid Model-2: what happened to his lab in 1904? Sentence: newcastle has a horse racing course at gosforth park. Answer: gosforth park Reference Question: where is newcastle ’s horse racing course located? Baseline Model-2: where does newcastle have a horse racing course? Hybrid Model-2: where is newcastle ’s horse racing course located? Table 9: Examples of generated questions. Paraphrasing Methods BLEU-4 METEOR baseline-2 16.13 19.95 PPDB 16.65 20.57 NMT 16.76 20.44 Google Trans 16.93 20.58 Mixed 17.05 20.75 Table 10: Hybrid model-2 performances using different paraphrase expansion methods on SQuAD(Zhou Split). can observe that the Mixed paraphrase method even obtain better results than the mature Google Translate. 
It proves that our proposed architecture is effective across different paraphrasing methods and has potential for improvement. 6 Conclusion and Future Work In this paper, we propose a two-hand hybrid model leveraging paraphrase knowledge for QG. The experimental results of independent modules and hybrid models prove that our models are effective and transferable. Besides, human evaluation results demonstrate that the paraphrase knowledge benefits our model to ask more human-like questions of high quality. In the future, we will explore more diverse and advanced paraphrase expanding methods for both sentence and paragraph level QG. Moreover, we will apply our methods to other similar tasks, such as sentence simplification. Acknowledgments We thank Weikang Li and Minghua Zhang for their valuable comments and suggestions. This work is supported by the National Natural Science Foundation of China (61773026) and the Key Project of Natural Science Foundation of China (61936012). References Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved statistical machine translation using paraphrases. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 17–24, New York City, USA. Association for Computational Linguistics. Bo Chen, Le Sun, Xianpei Han, and Bo An. 2016. Sentence rewriting for semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 766–777, Berlin, Germany. Association for Computational Linguistics. Yu Chen, Lingfei Wu, and Mohammed J. Zaki. 2019. Natural question generation with reinforcement learning based graph-to-sequence model. ArXiv, abs/1910.08832. Michael J. Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In WMT@ACL. Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to paraphrase for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 875–886, Copenhagen, Denmark. Association for Computational Linguistics. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In NeurIPS. Xinya Du and Claire Cardie. 2018. Harvesting paragraph-level question-answer pairs from Wikipedia. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1907–1917, Melbourne, Australia. Association for Computational Linguistics. Xinya Du, Junru Shao, and Claire Cardie. 2017. Learning to ask: Neural question generation for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1342–1352, Vancouver, Canada. Association for Computational Linguistics. 6139 Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 866–874, Copenhagen, Denmark. Association for Computational Linguistics. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1608–1618, Sofia, Bulgaria. Association for Computational Linguistics. 
Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 758–764, Atlanta, Georgia. Association for Computational Linguistics. Yifan Gao, Jianan Wang, Lidong Bing, Irwin King, and Michael R. Lyu. 2018. Difficulty controllable question generation for reading comprehension. ArXiv, abs/1807.03586. Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018a. Dynamic multi-level multi-task learning for sentence simplification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 462–476, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2018b. Soft layer-specific multi-task summarization with entailment and question generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 687–697, Melbourne, Australia. Association for Computational Linguistics. Michael Heilman and Noah A. Smith. 2010. Good question! statistical ranking for question generation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 609–617, Los Angeles, California. Association for Computational Linguistics. Yanghoon Kim, Hwanhee Lee, Joongbo Shin, and Kyomin Jung. 2018. Improving neural question generation using answer separation. In AAAI. Olga Kovaleva, Anna Rumshisky, and Alexey Romanov. 2018. Similarity-based reconstruction loss for meaning representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4875–4880, Brussels, Belgium. Association for Computational Linguistics. Reno Kriz, Eleni Miltsakaki, Marianna Apidianaki, and Chris Callison-Burch. 2018. Simplification using paraphrases and context-based lexical substitution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 207–217, New Orleans, Louisiana. Association for Computational Linguistics. Jingjing Li, Yifan Gao, Lidong Bing, Irwin King, and Michael R. Lyu. 2019a. Improving question generation with to the point context. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3216–3226, Hong Kong, China. Association for Computational Linguistics. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. Zichao Li, Xin Jiang, Lifeng Shang, and Qun Liu. 2019b. Decomposable neural paraphrase generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3403–3414, Florence, Italy. Association for Computational Linguistics. Bang Liu, Mingjun Zhao, Di Niu, Kunfeng Lai, Yancheng He, Haojie Wei, and Yu Xu. 2019. Learning to generate questions by learning what not to generate. ArXiv, abs/1902.10418. 
Nasrin Mostafazadeh, Ishan Misra, Jacob Devlin, Margaret Mitchell, Xiaodong He, and Lucy Vanderwende. 2016. Generating natural questions about an image. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1802–1813, Berlin, Germany. Association for Computational Linguistics. Preksha Nema, Akash Kumar Mohankumar, Mitesh M. Khapra, Balaji Vasan Srinivasan, and Balaraman Ravindran. 2019. Let’s ask again: Refine network for automatic question generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3314–3323, Hong Kong, China. Association for Computational Linguistics. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine reading comprehension dataset. ArXiv, abs/1611.09268. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, 6140 Pennsylvania, USA. Association for Computational Linguistics. Aaditya Prakash, Sadid A. Hasan, Kathy Lee, Vivek Datla, Ashequl Qadir, Joey Liu, and Oladimeji Farri. 2016. Neural paraphrase generation with stacked residual LSTM networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2923–2934, Osaka, Japan. The COLING 2016 Organizing Committee. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Linfeng Song, Zhiguo Wang, Wael Hamza, Yue Zhang, and Daniel Gildea. 2018. Leveraging context information for natural question generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 569–574, New Orleans, Louisiana. Association for Computational Linguistics. Xingwu Sun, Jing Liu, Yajuan Lyu, Wei He, Yanjun Ma, and Shi Wang. 2018. Answer-focused and position-aware neural question generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3930– 3939, Brussels, Belgium. Association for Computational Linguistics. Duyu Tang, Nan Duan, Tao Qin, and Ming Zhou. 2017. Question answering and question generation as dual tasks. ArXiv, abs/1706.02027. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Qizhe Xie, Zihang Dai, Eduard H. Hovy, Minh-Thang Luong, and Quoc V. Le. 2019. Unsupervised data augmentation. ArXiv, abs/1904.12848. Jingjing Xu, Yuechen Wang, Duyu Tang, Nan Duan, Pengcheng Yang, Qi Zeng, Ming Zhou, and Xu Sun. 2019. 
Asking clarification questions in knowledgebased question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1618–1629, Hong Kong, China. Association for Computational Linguistics. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. ArXiv, abs/1804.09541. Shiyue Zhang and Mohit Bansal. 2019. Addressing semantic drift in question generation for semisupervised question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2495–2509, Hong Kong, China. Association for Computational Linguistics. Sanqiang Zhao, Rui Meng, Daqing He, Andi Saptono, and Bambang Parmanto. 2018a. Integrating transformer and paraphrase rules for sentence simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3164–3173, Brussels, Belgium. Association for Computational Linguistics. Yao Zhao, Xiaochuan Ni, Yuanyuan Ding, and Qifa Ke. 2018b. Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3901–3910, Brussels, Belgium. Association for Computational Linguistics. Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural question generation from text: A preliminary study. In NLPCC. Wenjie Zhou, Minghua Zhang, and Yunfang Wu. 2019a. Multi-task learning with language modeling for question generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3394–3399, Hong Kong, China. Association for Computational Linguistics. Wenjie Zhou, Minghua Zhang, and Yunfang Wu. 2019b. Question-type driven question generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6032– 6037, Hong Kong, China. Association for Computational Linguistics.
2020
545
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6141–6151 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6141 NeuInfer: Knowledge Inference on N-ary Facts Saiping Guan, Xiaolong Jin, Jiafeng Guo, Yuanzhuo Wang, and Xueqi Cheng School of Computer Science and Technology, University of Chinese Academy of Sciences; CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences {guansaiping,jinxiaolong,guojiafeng,wangyuanzhuo,cxq}@ict.ac.cn Abstract Knowledge inference on knowledge graph has attracted extensive attention, which aims to find out connotative valid facts in knowledge graph and is very helpful for improving the performance of many downstream applications. However, researchers have mainly poured attention to knowledge inference on binary facts. The studies on n-ary facts are relatively scarcer, although they are also ubiquitous in the real world. Therefore, this paper addresses knowledge inference on n-ary facts. We represent each n-ary fact as a primary triple coupled with a set of its auxiliary descriptive attribute-value pair(s). We further propose a neural network model, NeuInfer, for knowledge inference on n-ary facts. Besides handling the common task to infer an unknown element in a whole fact, NeuInfer can cope with a new type of task, flexible knowledge inference. It aims to infer an unknown element in a partial fact consisting of the primary triple coupled with any number of its auxiliary description(s). Experimental results demonstrate the remarkable superiority of NeuInfer. 1 Introduction With the introduction of connotative valid facts, knowledge inference on knowledge graph improves the performance of many downstream applications, such as vertical search and question answering (Dong et al., 2015; Lukovnikov et al., 2017). Existing studies (Nickel et al., 2016; Wang et al., 2017) mainly focus on knowledge inference on binary facts with two entities connected with a certain binary relation, represented as triples, (head entity, relation, tail entity). They attempt to infer the unknown head/tail entity or the unknown relation of a given binary fact. However, n-ary facts involving more than two entities are also ubiquitous. For example, in Freebase, more than 1/3 entities participate in n-ary facts (Wen et al., 2016). The fact that John Bardeen received Nobel Prize in Physics in 1956 together with Walter Houser Brattain and William Shockley1 is a typical 5ary fact. So far, only a few studies (Wen et al., 2016; Zhang et al., 2018; Guan et al., 2019) have tried to address knowledge inference on n-ary facts. In existing studies for knowledge inference on nary facts, each n-ary fact is represented as a group of peer attributes and attribute values. In practice, for each n-ary fact, there is usually a primary triple (the main focus of the n-ary fact), and other attributes along with the corresponding attribute values are its auxiliary descriptions. Take the above 5-ary fact for example, the primary triple is (John Bardeen, award-received, Nobel Prize in Physics), and other attribute-value pairs including point-in-time : 1956, together-with : Walter Houser Brattain and together-with : William Shockley are its auxiliary descriptions. Actually, in YAGO (Suchanek et al., 2007) and Wikidata (Vrandeˇci´c and Kr¨otzsch, 2014), a primary triple is identified for each n-ary fact. The above 5-ary fact is a relatively complete example. 
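As an illustration of this representation, the 5-ary fact above can be stored as a primary triple plus its auxiliary attribute-value pairs. The sketch below is ours and not part of NeuInfer; the class and field names are placeholders.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class NaryFact:
    # a primary triple (h, r, t) plus auxiliary attribute-value pairs a_i : v_i
    primary: Tuple[str, str, str]
    auxiliary: List[Tuple[str, str]] = field(default_factory=list)

    @property
    def arity(self):
        # two entities in the primary triple, plus one value per auxiliary description
        return 2 + len(self.auxiliary)

fact = NaryFact(
    primary=("John Bardeen", "award-received", "Nobel Prize in Physics"),
    auxiliary=[("point-in-time", "1956"),
               ("together-with", "Walter Houser Brattain"),
               ("together-with", "William Shockley")])
print(fact.arity)  # 5

Dropping some of the auxiliary pairs from such a structure yields the partial facts discussed next.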
In the real-world scenario, many n-ary facts appear as only partial ones, each consisting of a primary triple and a subset of its auxiliary description(s), due to incomplete knowledge acquisition. For example, (John Bardeen, awardreceived, Nobel Prize in Physics) with pointin-time : 1956 and it with {together-with : Walter Houser Brattain, together-with : William Shockley} are two typical partial facts corresponding to the above 5-ary fact. For differentiation, we call those relatively complete facts as whole ones. We noticed that existing studies on n-ary facts infer an unknown element in a welldefined whole fact and have not paid attention to knowledge inference on partial facts. Later on, we 1https://www.wikidata.org/wiki/Q949 6142 refer the former as simple knowledge inference, while the latter as flexible knowledge inference. With these considerations in mind, in this paper, by discriminating the information in the same n-ary fact, we propose a neural network model, called NeuInfer, to conduct both simple and flexible knowledge inference on n-ary facts. Our specific contributions are summarized as: • We treat the information in the same n-ary fact discriminatingly and represent each n-ary fact as a primary triple coupled with a set of its auxiliary descriptive attribute-value pair(s). • We propose a neural network model, NeuInfer, for knowledge inference on n-ary facts. NeuInfer can particularly handle the new type of task, flexible knowledge inference, which infers an unknown element in a partial fact consisting of a primary triple and any number of its auxiliary description(s). • Experimental results validate the significant effectiveness and superiority of NeuInfer. 2 Related Works 2.1 Knowledge Inference on Binary Facts They can be divided into tensor/matrix based methods, translation based methods, and neural network based ones. The quintessential one of tensor/matrix based methods is RESCAL (Nickel et al., 2011). It relates a knowledge graph to a three-way tensor of head entities, relations, and tail entities. The learned embeddings of entities and relations via minimizing the reconstruction error of the tensor are used to reconstruct the tensor. And binary facts corresponding to entries of large values are treated as valid. Similarly, ComplEx (Trouillon et al., 2016) relates each relation to a matrix of head and tail entities, which is decomposed and learned like RESCAL. To improve the embeddings and thus the performance of inference, researchers further introduce the constraints of entities and relations (Ding et al., 2018; Jain et al., 2018). Translation based methods date back to TransE (Bordes et al., 2013). It views each valid binary fact as the translation from the head entity to the tail entity via their relation. Thus, the score function indicating the validity of the fact is defined based on the similarity between the translation result and the tail entity. Then, a flurry of methods spring up (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015; Guo et al., 2015; Lin et al., 2015a; Xiao et al., 2016; Jia et al., 2016; Tay et al., 2017; Ebisu and Ichise, 2018; Chen et al., 2019). They modify the above translation assumption or introduce additional information and constraints. Among them, TransH (Wang et al., 2014) translates on relationspecific hyperplanes. Entities are projected into the hyperplanes of relations before translating. Neural network based methods model the validity of binary facts or the inference processes. 
For example, ConvKB (Nguyen et al., 2018) treats each binary fact as a three-column matrix. This matrix is fed into a convolution layer, followed by a concatenation layer and a fully-connected layer to generate a validity score. Nathani et al. (2019) further proposes a generalized graph attention model as the encoder to capture neighborhood features and applies ConvKB as the decoder. ConvE (Dettmers et al., 2018) models entity inference process via 2D convolution over the reshaped then concatenated embedding of the known entity and relation. ConvR (Jiang et al., 2019) further adaptively constructs convolution filters from relation embedding and applies these filters across entity embedding to generate convolutional features. SENN (Guan et al., 2018) models the inference processes of head entities, tail entities, and relations via fullyconnected neural networks, and integrates them into a unified framework. 2.2 Knowledge Inference on N-ary Facts As aforesaid, only a few studies handle this type of knowledge inference. The m-TransH method (Wen et al., 2016) defines n-ary relations as the mappings from the attribute sequences to the attribute values. Each n-ary fact is an instance of the corresponding n-ary relation. Then, m-TransH generalizes TransH (Wang et al., 2014) on binary facts to nary facts via attaching each n-ary relation with a hyperplane. RAE (Zhang et al., 2018) further introduces the likelihood that two attribute values co-participate in a common n-ary fact, and adds the corresponding relatedness loss multiplied by a weight factor to the embedding loss of m-TransH. Specifically, RAE applies a fully-connected neural network to model the above likelihood. Differently, NaLP (Guan et al., 2019) represents each n-ary fact as a set of attribute-value pairs directly. Then, convolution is adopted to get the embeddings of the attribute-value pairs, and a fully-connected neural 6143 network is applied to evaluate their relatedness and finally to obtain the validity score of the input n-ary fact. In these methods, the information in the same n-ary fact is equal-status. Actually, in each n-ary fact, a primary triple can usually be identified with other information as its auxiliary description(s), as exemplified in Section 1. Moreover, these methods are deliberately designed only for the inference on whole facts. They have not tackled any distinct inference task. In practice, the newly proposed flexible knowledge inference is also prevalent. 3 Problem Statement 3.1 The Representation of N-ary Facts Different from the studies that define n-ary relations first and then represent n-ary facts (Wen et al., 2016; Zhang et al., 2018), we represent each n-ary fact as a primary triple (head entity, relation, tail entity) coupled with a set of its auxiliary description(s) directly. Formally, given an n-ary fact Fct with the primary triple (h, r, t), m attributes and attribute values, its representation is: (h,r, t), { |−−a1 : v1, |−−a2 : v2, |−−. . . , |−−am : vm}  , where each ai :vi (i = 1, 2, . . . , m) is an attributevalue pair, also called an auxiliary description to the primary triple. An element of Fct refers to h/r/t/ai/vi; AFct = {a1, a2, . . . , am} is Fct’s attribute set and ai may be the same to aj (i, j = 1, 2, . . . , m, i ̸= j); VFct = {v1, v2, . . . , vm} is Fct’s attribute value set. 
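To make this representation concrete, the following is a minimal Python sketch of the data structure; the class and helper names are illustrative only and are not taken from the released NeuInfer code. The helper enumerates subsets of the auxiliary description(s), which is how the partial facts of Section 3.2 and the flexible test set of Section 5.5 are formed.

```python
from dataclasses import dataclass
from itertools import combinations
from typing import List, Tuple

AttrValue = Tuple[str, str]  # an auxiliary description a_i : v_i

@dataclass(frozen=True)
class NaryFact:
    """An n-ary fact: a primary triple plus auxiliary attribute-value pair(s)."""
    head: str
    relation: str
    tail: str
    aux: Tuple[AttrValue, ...] = ()  # attributes may repeat (e.g., two together-with pairs)

    @property
    def arity(self) -> int:
        # The bare triple is binary; each auxiliary pair raises the arity by one.
        return 2 + len(self.aux)

def partial_facts(fact: NaryFact) -> List[NaryFact]:
    """Partial facts: the primary triple with any strict subset of the auxiliary pairs."""
    result = []
    for size in range(len(fact.aux)):  # excludes the full set, i.e., the whole fact itself
        for subset in combinations(fact.aux, size):
            result.append(NaryFact(fact.head, fact.relation, fact.tail, subset))
    return result

bardeen = NaryFact(
    "John Bardeen", "award-received", "Nobel Prize in Physics",
    (("point-in-time", "1956"),
     ("together-with", "Walter Houser Brattain"),
     ("together-with", "William Shockley")),
)
assert bardeen.arity == 5
assert len(partial_facts(bardeen)) == 7  # 2^3 - 1 strict subsets of three auxiliary pairs
```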
For example, the representation of the 5-ary fact mentioned in Section 1 is:
((John Bardeen, award-received, Nobel Prize in Physics), {point-in-time : 1956, together-with : Walter Houser Brattain, together-with : William Shockley}).
Note that, in the real world, there is a type of complicated case in which more than two entities participate in the same n-ary fact with the same primary attribute. We follow Wikidata (Vrandečić and Krötzsch, 2014) and view such cases from the aspects of the different entities. Take the case that John Bardeen, Walter Houser Brattain, and William Shockley received the Nobel Prize in Physics in 1956: besides the above 5-ary fact from the view of John Bardeen, we obtain two other 5-ary facts from the views of Walter Houser Brattain2 and William Shockley3, respectively:
((Walter Houser Brattain, award-received, Nobel Prize in Physics), {point-in-time : 1956, together-with : John Bardeen, together-with : William Shockley}).
((William Shockley, award-received, Nobel Prize in Physics), {point-in-time : 1956, together-with : Walter Houser Brattain, together-with : John Bardeen}).
3.2 Task Statement
In this paper, we handle both the common simple knowledge inference and the newly proposed flexible knowledge inference. Before giving their definitions under our representation form of n-ary facts, let us first define whole fact and partial fact.
Definition 1 (Whole fact and partial fact). For the fact Fct, denote its set of auxiliary description(s) as Sd = {ai : vi | i = 1, 2, . . . , m}. Then a partial fact of Fct is Fct′ = ((h, r, t), S′d), where S′d ⊂ Sd, i.e., S′d is a subset of Sd. We call Fct the whole fact to differentiate it from Fct′. Notably, whole fact and partial fact are relative concepts, and a whole fact is a relatively complete fact compared with its partial facts. In this paper, partial facts are introduced to imitate a typical open-world setting where different facts of the same type may have different numbers of attribute-value pair(s).
Definition 2 (Simple knowledge inference). It aims to infer an unknown element in a whole fact.
Definition 3 (Flexible knowledge inference). It aims to infer an unknown element in a partial fact.
4 The NeuInfer Method
4.1 The Framework of NeuInfer
To conduct knowledge inference on n-ary facts, NeuInfer first models the validity of n-ary facts and then casts inference as a classification task.
2 https://www.wikidata.org/wiki/Q184577
3 https://www.wikidata.org/wiki/Q163415
Figure 1: The framework of the proposed NeuInfer method. (The primary triple passes through hrt-FCNs and FCN1 to yield a validity score; the triple together with each auxiliary pair passes through hrtav-FCNs, an element-wise min, and FCN2 to yield a compatibility score; the two scores are combined by a weighted sum ⊕ into the final score.)
4.1.1 The Motivation of NeuInfer
How can we estimate whether an n-ary fact is valid or not? Let us look into two typical examples of invalid n-ary facts:
((John Bardeen, award-received, Turing Award), {point-in-time : 1956, together-with : Walter Houser Brattain, together-with : William Shockley}).
((John Bardeen, award-received, Nobel Prize in Physics), {point-in-time : 1956, together-with : Walter Houser Brattain, place-of-marriage : New York City}).
In the first n-ary fact above, the primary triple is invalid. In the second one, one auxiliary description is incompatible with the primary triple.
Therefore, we believe that a valid n-ary fact has two prerequisites. On the one hand, its primary triple should be valid. If the primary triple is invalid, attaching any number of attribute-value pairs to it does not make the resulting n-ary fact valid; on the other hand, since each auxiliary description presents a qualifier to the primary triple, it should be compatible with the primary triple. Even if the primary triple is basically valid, any incompatible attribute-value pair makes the n-ary fact invalid. Therefore, NeuInfer is designed to characterize these two aspects and thus consists of two components corresponding to the validity evaluation of the primary triple and the compatibility evaluation of the n-ary fact, respectively. 4.1.2 The Framework of NeuInfer The framework of NeuInfer is illustrated in Figure 1, with the 5-ary fact presented in Section 1 as an example. For an n-ary fact Fct, we look up the embeddings of its relation r and the attributes in AFct from the embedding matrix MR ∈R|R|×k of relations and attributes, where R is the set of all the relations and attributes, and k is the dimension of the latent vector space. The embeddings of h, t, and the attribute values in VFct are looked up from the embedding matrix ME ∈R|E|×k of entities and attribute values, where E is the set of all the entities and attribute values. In what follows, the embeddings are denoted with the same letters but in boldface by convention. As presented in Figure 1, these embeddings are fed into the validity evaluation component (the upper part of Figure 1) and the compatibility evaluation component (the bottom part of Figure 1) to compute the validity score of (h, r, t) and the compatibility score of Fct, respectively. These two scores are used to generate the final score of Fct by weighted sum ⊕and further compute the loss. Note that, following RAE (Zhang et al., 2018) and NaLP (Guan et al., 2019), we only apply fully-connected neural networks in NeuInfer. 4.2 Validity Evaluation This component estimates the validity of (h, r, t), including the acquisition of its interaction vector and the assessment of its validity, corresponding to “hrt-FCNs” and “FCN1” in Figure 1, respectively. Detailedly, the embeddings of h, r, and t are 6145 concatenated and fed into a fully-connected neural network. After layer-by-layer learning, the last layer outputs the interaction vector ohrt of (h, r, t): ohrt =f(f(· · ·f(f([h; r; t]W1,1 + b1,1)· W1,2 + b1,2) · · · )W1,n1 + b1,n1), (1) where f(·) is the ReLU function; n1 is the number of the neural network layers; {W1,1, W1,2, . . . , W1,n1} and {b1,1, b1,2, . . . , b1,n1} are their weight matrices and bias vectors, respectively. With ohrt as the input, the validity score valhrt of (h, r, t) is computed via a fully-connected layer and then the sigmoid operation: valhrt = σ(ohrtWval + bval), (2) where Wval and bval are the weight matrix and bias variable, respectively; σ(x) = 1 1+e−x is the sigmoid function, which constrains valhrt ∈(0, 1). For simplicity, the number of hidden nodes in each fully-connected layer of “hrt-FCNs” and “FCN1” gradually reduces with the same difference between layers. 4.3 Compatibility Evaluation This component estimates the compatibility of Fct. It contains three sub-processes, i.e., the capture of the interaction vector between (h, r, t) and each auxiliary description ai : vi (i = 1, 2, . . . 
, m), the acquisition of the overall interaction vector, and the assessment of the compatibility of Fct, corresponding to “hrtav-FCNs”, “min” and “FCN2” in Figure 1, respectively. Similar to “hrt-FCNs”, we obtain the interaction vector ohrtaivi of (h, r, t) and ai :vi: ohrtaivi=f(f(· · ·f(f([h; r; t; ai; vi]W2,1+b2,1)· W2,2 + b2,2) · · · )W2,n2 + b2,n2), (3) where n2 is the number of the neural network layers; {W2,1, W2,2, . . . , W2,n2} and {b2,1, b2,2, . . . , b2,n2} are their weight matrices and bias vectors, respectively. The number of hidden nodes in each fully-connected layer also gradually reduces with the same difference between layers. And the dimension of the resulting ohrtaivi is d. All the auxiliary descriptions share the same parameters in this sub-process. The overall interaction vector ohrtav of Fct is generated based on ohrtaivi. Before introducing this sub-process, let us see the principle behind first. Straightforwardly, if Fct is valid, (h, r, t) should be compatible with any of its auxiliary description. Then, the values of their interaction vector, measuring the compatibility in many different views, are all encouraged to be large. Therefore, for each dimension, the minimum over it of all the interaction vectors is not allowed to be too small. Thus, the overall interaction vector ohrtav of (h, r, t) and its auxiliary description(s) is: ohrtav = minm i=1(ohrtaivi), (4) where min(·) is the element-wise minimizing function. Then, similar to “FCN1”, we obtain the compatibility score compFct of Fct: compFct = σ(ohrtavWcomp + bcomp), (5) where Wcomp of dimension d × 1 and bcomp are the weight matrix and bias variable, respectively. 4.4 Final Score and Loss Function The final score sFct of Fct is the weighted sum ⊕ of the above validity score and compatibility score: sFct = valhrt ⊕compFct = w · valhrt + (1 −w) · compFct, (6) where w ∈(0, 1) is the weight factor. If the arity of Fct is 2, the final score is equal to the validity score of the primary triple (h, r, t). Then, Equation (6) is reduced to: sFct = valhrt. (7) Currently, we obtain the final score sFct of Fct. In addition, Fct has its target score lFct. By comparing sFct with lFct, we get the binary crossentropy loss: LFct =−lFct logsFct−(1−lFct) log(1−sFct), (8) where lFct = 1, if Fct ∈T, otherwise Fct ∈T −, lFct = 0. Here, T is the training set and T −is the set of negative samples constructed by corrupting the n-ary facts in T. Specifically, for each n-ary fact in T, we randomly replace one of its elements with a random element in E/R to generate one negative sample not contained in T. We then optimize NeuInfer via backpropagation, and Adam (Kingma and Ba, 2015) with learning rate λ is used as the optimizer. 6146 5 Experiments 5.1 Datasets and Metrics We conduct experiments on two n-ary datasets. The first one is JF17K (Wen et al., 2016; Zhang et al., 2018), derived from Freebase (Bollacker et al., 2008). In JF17K, an n-ary relation of a certain type is defined by a fixed number of ordered attributes. Then, any n-ary fact of this relation is denoted as an ordered sequence of attribute values corresponding to the attributes. For example, for all n-ary facts of the n-ary relation olympics.olympic medal honor, they all have four attribute values (e.g., 2008 Summer Olympics, United States, Natalie Coughlin, and Swimming at the 2008 Summer Olympics – Women′s 4×100 metre freestyle relay), corresponding to the four ordered attributes of this n-ary relation. 
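The following schematic PyTorch sketch summarizes how the scores of Section 4 combine: the validity score (Eqs. (1)-(2)), the compatibility score (Eqs. (3)-(5)), and the weighted final score (Eqs. (6)-(7)). It uses a single hidden layer where NeuInfer stacks n1/n2 layers with shrinking widths, and the default dimensions and weight are placeholders rather than the tuned values, so it should be read as an illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class NeuInferScorer(nn.Module):
    """Schematic sketch of NeuInfer scoring: validity of the primary triple plus
    compatibility of its auxiliary descriptions, combined by a weighted sum."""
    def __init__(self, k: int = 100, d: int = 1000, w: float = 0.3):
        super().__init__()
        self.w = w
        # "hrt-FCNs" + "FCN1": interaction vector of (h, r, t) -> validity score
        self.hrt_fcn = nn.Sequential(nn.Linear(3 * k, d), nn.ReLU())
        self.fcn1 = nn.Linear(d, 1)
        # "hrtav-FCNs" + "FCN2": interaction of (h, r, t, a_i, v_i) -> compatibility score
        self.hrtav_fcn = nn.Sequential(nn.Linear(5 * k, d), nn.ReLU())
        self.fcn2 = nn.Linear(d, 1)

    def forward(self, h, r, t, attrs, values):
        # h, r, t: (k,) embeddings; attrs, values: (m, k) embeddings of the m auxiliary pairs.
        hrt = torch.cat([h, r, t])
        validity = torch.sigmoid(self.fcn1(self.hrt_fcn(hrt)))            # Eqs. (1)-(2)
        if attrs.shape[0] == 0:                                           # binary fact, Eq. (7)
            return validity
        pairs = torch.cat([hrt.expand(attrs.shape[0], -1), attrs, values], dim=-1)
        o_min = self.hrtav_fcn(pairs).min(dim=0).values                   # Eqs. (3)-(4)
        compatibility = torch.sigmoid(self.fcn2(o_min))                   # Eq. (5)
        return self.w * validity + (1.0 - self.w) * compatibility         # Eq. (6)

# Training compares the final score against 1 for observed facts and 0 for negative
# samples obtained by corrupting one element of a training fact (Eq. 8):
bce = nn.BCELoss()
```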
The second one is WikiPeople (Guan et al., 2019), derived from Wikidata (Vrandeˇci´c and Kr¨otzsch, 2014). Its n-ary facts are more diverse than JF17K’s. For example, for all n-ary facts that narrate award-received, some have the attribute together-with, while some others do not. Thus, WikiPeople is more difficult. To run NeuInfer on JF17K and WikiPeople, we transform the representation of their n-ary facts. For JF17K, we need to convert each attribute value sequence of a specific n-ary relation to a primary triple coupled with a set of its auxiliary description(s). The core of this process is to determine the primary triple, formed by merging the two primary attributes of the n-ary relation and the corresponding attribute values. The two primary attributes are selected based on RAE (Zhang et al., 2018). For each attribute of the n-ary relation, we count the number of its distinct attribute values from all the n-ary facts of this relation. The two attributes that correspond to the largest and second-largest numbers are chosen as the two primary attributes. For WikiPeople, since there is a primary triple for each n-ary fact in Wikidata, with its help, we simply reorganize a set of attribute-value pairs in WikiPeople to a primary triple coupled with a set of its auxiliary description(s). The statistics of the datasets after conversion or reorganization are outlined in Table 1, where #Train, #V alid, and #Test are the sizes of the training set, validation set, and test set, respectively. As for metrics, we adopt the standard Mean ReDataset |R| |E| #Train #V alid #Test JF17K 501 28,645 76,379 24,568 WikiPeople 193 47,765 305,725 38,223 38,281 Table 1: The statistics of the datasets. ciprocal Rank (MRR) and Hits@N. For each n-ary test fact, one of its elements is removed and replaced by all the elements in E/R. These corrupted n-ary facts are fed into NeuInfer to obtain the final scores. Based on these scores, the n-ary facts are sorted in descending order, and the rank of the n-ary test fact is stored. Note that, except the nary test fact, other corrupted n-ary facts existing in the training/validation/test set, are discarded before sorting. This process is repeated for all other elements of the n-ary test fact. Then, MRR is the average of these reciprocal ranks, and Hits@N is the proportion of the ranks less than or equal to N. Knowledge inference includes entity inference and relation inference. As presented in Table 1, the number of relations and attributes in each dataset is far less than that of entities and attribute values (on JF17K, |R| = 501, while |E| = 28, 645; on WikiPeople, |R| = 193, while |E| = 47, 765). That is, inferring a relation/attribute is much simpler than inferring an entity/attribute value. Therefore, we adopt MRR and Hits@{1, 3, 10} on entity inference, while pouring attention to more finegrained metrics, i.e., MRR and Hits@1 on relation inference. 5.2 Experimental Settings The hyper-parameters of NeuInfer are tuned via grid search in the following ranges: The embedding dimension k ∈{50, 100}, the batch size β ∈{128, 256}, the learning rate λ ∈{5e−6, 1e−5, 5e−5, 1e−4, 5e−4, 1e−3}, the numbers n1 and n2 of the neural network layers of “hrt-FCNs” and “hrtav-FCNs” in {1, 2}, the dimension d of the interaction vector ohrtaivi in {50, 100, 200, 400, 500, 800, 1000, 1200}, the weight factor w of the scores in {0.1, 0.2, . . . , 0.9}. 
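The ranking protocol just described (replace one element at a time, discard other known facts, sort by score) can be sketched as follows; the fact encoding and scoring function here are placeholders and are not the authors' evaluation code.

```python
from typing import Callable, Iterable, List, Set, Tuple

def filtered_rank(test_fact: Tuple, position: int, candidates: Iterable,
                  known_facts: Set[Tuple], score: Callable[[Tuple], float]) -> int:
    """Rank of the true fact among corruptions of one element (filtered setting).

    `test_fact` is any hashable tuple encoding of an n-ary fact, `position` indexes
    the element being replaced, and `known_facts` holds all train/valid/test facts
    so that other valid corruptions are discarded before sorting.
    """
    true_score = score(test_fact)
    rank = 1
    for cand in candidates:
        corrupted = test_fact[:position] + (cand,) + test_fact[position + 1:]
        if corrupted == test_fact or corrupted in known_facts:
            continue  # filtered setting: skip the fact itself and other known facts
        if score(corrupted) > true_score:
            rank += 1
    return rank

def mrr_and_hits(ranks: List[int], ns=(1, 3, 10)):
    """MRR is the mean reciprocal rank; Hits@N is the fraction of ranks <= N."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = {n: sum(r <= n for r in ranks) / len(ranks) for n in ns}
    return mrr, hits
```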
The adopted optimal settings are: k = 100, β = 128, λ = 5e−5, n1 = 2, n2 = 1, d = 1200, and w = 0.1 for JF17K; k = 100, β = 128, λ = 1e−4, n1 = 1, n2 = 1, d = 1000, and w = 0.3 for WikiPeople. 5.3 Simple Knowledge Inference Simple knowledge inference includes simple entity inference and simple relation inference. For an nary fact, they infer one of the entities/the relation in 6147 Method JF17K WikiPeople MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 RAE 0.310 0.219 0.334 0.504 0.172 0.102 0.182 0.320 NaLP 0.366 0.290 0.391 0.516 0.338 0.272 0.364 0.466 NeuInfer 0.517 0.436 0.553 0.675 0.350 0.282 0.381 0.467 Table 2: Experimental results of simple entity inference. the primary triple or the attribute value/attribute in an auxiliary description, given its other information. 5.3.1 Baselines Knowledge inference methods on n-ary facts are scarce. The representative methods are mTransH (Wen et al., 2016) and its modified version RAE (Zhang et al., 2018), and the state-of-the-art one is NaLP (Guan et al., 2019). As m-TransH is worse than RAE, following NaLP, we do not adopt it as a baseline. 5.3.2 Simple Entity Inference The experimental results of simple entity inference are reported in Table 2. From the results, it can be observed that NeuInfer performs much better than the best baseline NaLP, which verifies the superiority of NeuInfer. Specifically, on JF17K, the performance gap between NeuInfer and NaLP is significant. In essence, 0.151 on MRR, 14.6% on Hits@1, 16.2% on Hits@3, and 15.9% on Hits@10. On WikiPeople, NeuInfer also outperforms NaLP. It testifies the strength of NeuInfer treating the information in the same n-ary fact discriminatingly. By differentiating the primary triple from other auxiliary description(s), NeuInfer considers the validity of the primary triple and the compatibility between the primary triple and its auxiliary description(s) to model each n-ary fact more appropriately and reasonably. Thus, it is not surprising that NeuInfer beats the baselines. And on simpler JF17K (see Section 5.1), NeuInfer gains more significant performance improvement than on WikiPeople. 5.3.3 Simple Relation Inference Since RAE is deliberately developed only for simple entity inference, we compare NeuInfer only with NaLP on simple relation inference. Table 3 demonstrates the experimental results of simple relation inference. From the table, we can observe that NeuInfer outperforms NaLP consistently. Detailedly, on JF17K, the performance improvement of NeuInfer on MRR and Hits@1 is 0.036 and 7.0%, respectively; on WikiPeople, they are 0.030 and 9.1%, respectively. It is ascribed to the reasonable modeling of n-ary facts, which not only improves the performance of simple entity inference but also is beneficial to pick the exact right relations/attributes out. Method JF17K WikiPeople MRR Hits@1 MRR Hits@1 NaLP 0.825 0.762 0.735 0.595 NeuInfer 0.861 0.832 0.765 0.686 Table 3: Experimental results of simple relation inference. 5.4 Ablation Study We perform an ablation study to look deep into the framework of NeuInfer. If we remove the compatibility evaluation component, NeuInfer is reduced to a method for binary but not n-ary facts. Since we handle knowledge inference on n-ary facts, it is inappropriate to remove this component. Thus, as an ablation, we only deactivate the validity evaluation component, denoted as NeuInfer−. The experimental comparison between NeuInfer and NeuInfer− is illustrated in Figure 2. 
It can be observed from the figure that NeuInfer outperforms NeuInfer− significantly. This suggests that the validity evaluation component plays a pivotal role in our method. Thus, each component of our method is necessary.
5.5 Flexible Knowledge Inference
The newly proposed flexible knowledge inference focuses on n-ary facts of arities greater than 2. It includes flexible entity inference and flexible relation inference. For an n-ary fact, they infer one of the entities/the relation in the primary triple given any number of its auxiliary description(s), or infer the attribute value/attribute in an auxiliary description given the primary triple and any number of other auxiliary description(s). In existing knowledge inference methods on n-ary facts, each n-ary fact is represented as a group of peer attributes and attribute values, and these methods have not addressed flexible knowledge inference. Thus, we conduct this new type of task only on NeuInfer. Before elaborating on the experimental results, let us first look into the new test set used in this section.
Figure 2: The experimental comparison between NeuInfer and NeuInfer−. (MRR/Hits@1/Hits@3/Hits@10, NeuInfer vs. NeuInfer−. Simple entity inference: 0.517/0.436/0.553/0.675 vs. 0.433/0.379/0.465/0.529 on JF17K; 0.350/0.282/0.381/0.467 vs. 0.050/0.033/0.055/0.085 on WikiPeople. Simple relation inference: 0.861/0.832/0.886/0.904 vs. 0.710/0.702/0.713/0.717 on JF17K; 0.765/0.686/0.828/0.897 vs. 0.211/0.183/0.209/0.229 on WikiPeople.)
Table 4: Experimental results of flexible knowledge inference.
Dataset: flexible entity inference (MRR / Hits@1 / Hits@3 / Hits@10), flexible relation inference (MRR / Hits@1)
JF17K: 0.398 / 0.348 / 0.422 / 0.494, 0.616 / 0.599
WikiPeople: 0.200 / 0.161 / 0.208 / 0.276, 0.477 / 0.416
5.5.1 The New Test Set
We generate the new test set as follows:
• Collect the n-ary facts of arities greater than 2 from the test set.
• For each collected n-ary fact, compute all the subsets of its auxiliary description(s). The primary triple and each subset form a new n-ary fact, which is added to the candidate set.
• Remove from the candidate set the n-ary facts that also exist in the training/validation set, and then remove the duplicate n-ary facts. The remaining n-ary facts form the new test set.
The size of the resulting new test set is 34,784 on JF17K and 13,833 on WikiPeople.
5.5.2 Flexible Entity and Relation Inference
The experimental results of flexible entity and relation inference on these new test sets are presented in Table 4. It can be observed that NeuInfer handles flexible entity and relation inference on partial facts well and achieves strong performance. We also attribute this to the reasonable modeling of n-ary facts. For each n-ary fact, NeuInfer distinguishes the primary triple from the other auxiliary description(s) and models them properly. Thus, NeuInfer handles the various types of entity and relation inference concerning a primary triple coupled with any number of its auxiliary description(s).
5.6 Performance under Different Scenarios
To further analyze the effectiveness of the proposed NeuInfer method, we look into the breakdown of its performance on different arities, as well as on primary triples and auxiliary descriptions.
Without loss of generality, here we report only the experimental results on simple entity inference. The test sets are grouped into binary and n-ary (n > 2) categories according to the arities of the facts. Table 5 presents the experimental results of simple entity inference on these two categories of JF17K and WikiPeople. From the tables, we can observe that NeuInfer consistently outperforms the baselines on both categories on simpler JF17K. On more difficult WikiPeople, NeuInfer is comparable to the best baseline NaLP on the binary category and gains much better performance on the n-ary category in terms of the fine-grained MRR and Hits@1. In general, NeuInfer performs much better on JF17K than on WikiPeople. We attribute this to the simplicity of JF17K. Where does the above performance improvement come from? Is it from inferring the head/tail 6149 Dataset Method MRR Hits@1 Hits@3 Hits@10 Binary N-ary Binary N-ary Binary N-ary Binary N-ary JF17K RAE 0.115 0.397 0.050 0.294 0.108 0.434 0.247 0.618 NaLP 0.118 0.477 0.058 0.394 0.121 0.512 0.246 0.637 NeuInfer 0.267 0.628 0.173 0.554 0.300 0.666 0.462 0.770 WikiPeople RAE 0.169 0.187 0.096 0.126 0.178 0.198 0.323 0.306 NaLP 0.351 0.283 0.291 0.187 0.374 0.322 0.465 0.471 NeuInfer 0.350 0.349 0.278 0.303 0.385 0.364 0.473 0.439 Table 5: Experimental results of simple entity inference on binary and n-ary categories of JF17K and WikiPeople. Dataset Method MRR Hits@1 Hits@3 Hits@10 Binary N-ary Overall Binary N-ary Overall Binary N-ary Overall Binary N-ary Overall JF17K NaLP 0.118 0.456 0.313 0.058 0.369 0.237 0.121 0.491 0.334 0.246 0.625 0.464 NeuInfer 0.267 0.551 0.431 0.173 0.467 0.342 0.300 0.588 0.466 0.462 0.720 0.611 WikiPeople NaLP 0.351 0.237 0.337 0.291 0.161 0.276 0.374 0.262 0.361 0.465 0.384 0.455 NeuInfer 0.350 0.280 0.342 0.278 0.225 0.272 0.385 0.299 0.382 0.473 0.375 0.463 Table 6: Detailed experimental results on inferring head/tail entities. Method JF17K WikiPeople MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 NaLP 0.510 0.432 0.545 0.655 0.345 0.223 0.402 0.589 NeuInfer 0.746 0.687 0.787 0.848 0.443 0.408 0.453 0.516 Table 7: Experimental results on inferring attribute values. entities in primary triples or the attribute values in auxiliary descriptions? To go deep into it, we study the performance of NeuInfer on inferring the head/tail entities and the attribute values and compare it with the best baseline NaLP. The detailed experimental results are demonstrated in Tables 6 and 7. It can be observed that NeuInfer brings more performance gain on inferring attribute values. It indicates that combining the validity of the primary triple and the compatibility between the primary triple and its auxiliary description(s) to model each n-ary fact is more effective than only considering the relatedness of attribute-value pairs in NaLP, especially for inferring attribute values. 6 Conclusions In this paper, we distinguished the information in the same n-ary fact and represented each n-ary fact as a primary triple coupled with a set of its auxiliary description(s). We then proposed a neural network model, NeuInfer, for knowledge inference on n-ary facts. NeuInfer combines the validity evaluation of the primary triple and the compatibility evaluation of the n-ary fact to obtain the validity score of the n-ary fact. In this way, NeuInfer has the ability of well handling simple knowledge inference, which copes with the inference on whole facts. 
Furthermore, NeuInfer is capable of dealing with the newly proposed flexible knowledge inference, which tackles the inference on partial facts consisting of a primary triple coupled with any number of its auxiliary descriptive attributevalue pair(s). Experimental results manifest the merits and superiority of NeuInfer. Particularly, on simple entity inference, NeuInfer outperforms the state-of-the-art method significantly in terms of all the metrics. NeuInfer improves the performance of Hits@3 even by 16.2% on JF17K. In this paper, we use only n-ary facts in the datasets to conduct knowledge inference. For future works, to further improve the method, we will explore the introduction of additional information, such as rules and external texts. Acknowledgments The work is supported by the National Key Research and Development Program of China under grant 2016YFB1000902, the National Natural Science Foundation of China under grants U1911401, 61772501, U1836206, 91646120, and 61722211, the GFKJ Innovation Program, Beijing Academy of Artificial Intelligence (BAAI) under grant BAAI2019ZD0306, and the Lenovo-CAS Joint Lab Youth Scientist Project. 6150 References Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247–1250. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Proceedings of the 26th International Conference on Neural Information Processing Systems, pages 2787–2795. Mingyang Chen, Wen Zhang, Wei Zhang, Qiang Chen, and Huajun Chen. 2019. Meta relational learning for few-shot link prediction in knowledge graphs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 4208–4217, Hong Kong, China. Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 1811–1818. Boyang Ding, Quan Wang, Bin Wang, and Li Guo. 2018. Improving knowledge graph embedding using simple constraints. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 110–121. Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2015. Question answering over Freebase with multicolumn convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 260–269. Takuma Ebisu and Ryutaro Ichise. 2018. TorusE: Knowledge graph embedding on a Lie group. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 1819–1826. Saiping Guan, Xiaolong Jin, Yuanzhuo Wang, and Xueqi Cheng. 2018. Shared embedding based neural networks for knowledge graph completion. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 247–256. Saiping Guan, Xiaolong Jin, Yuanzhuo Wang, and Xueqi Cheng. 2019. Link prediction on n-ary relational data. In Proceedings of the 28th International Conference on World Wide Web, pages 583–593. Shu Guo, Quan Wang, Bin Wang, Lihong Wang, and Li Guo. 2015. Semantically smooth knowledge graph embedding. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 84–94. Prachi Jain, Pankaj Kumar, Soumen Chakrabarti, et al. 2018. Type-sensitive knowledge base inference without explicit type supervision. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 75–80. Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge graph embedding via dynamic mapping matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 687–696. Yantao Jia, Yuanzhuo Wang, Hailun Lin, Xiaolong Jin, and Xueqi Cheng. 2016. Locally adaptive translation for knowledge graph embedding. In Proceedings of the 30th AAAI Conference on Artificial Intelligence, pages 992–998. Xiaotian Jiang, Quan Wang, and Bin Wang. 2019. Adaptive convolution for multi-relational learning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 978–987, Minneapolis, Minnesota. Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations. Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015a. Modeling relation paths for representation learning of knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 705–714. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015b. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, pages 2181–2187. Denis Lukovnikov, Asja Fischer, Jens Lehmann, and S¨oren Auer. 2017. Neural network-based question answering over knowledge graphs on word and character level. In Proceedings of the 26th International Conference on World Wide Web, pages 1211–1220. Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. 2019. Learning attention-based embeddings for relation prediction in knowledge graphs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4710–4723, Florence, Italy. Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2018. A novel embedding model for knowledge base completion based on convolutional neural network. In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 327–333. 6151 Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2016. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11–33. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on Machine Learning, pages 809–816. Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web, pages 697–706. Yi Tay, Anh Tuan Luu, and Siu Cheung Hui. 2017. Non-parametric estimation of multiple embeddings for link prediction on dynamic knowledge graphs. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pages 1243–1249. Th´eo Trouillon, Johannes Welbl, Sebastian Riedel, Eric Gaussier, and Guillaume Bouchard. 2016. 
Complex embeddings for simple link prediction. In Proceedings of the 33rd International Conference on Machine Learning, pages 2071–2080. Denny Vrandeˇci´c and Markus Kr¨otzsch. 2014. Wikidata: A free collaborative knowledgebase. Communications of the ACM, 57(10):78–85. Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724– 2743. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the 28th AAAI Conference on Artificial Intelligence, pages 1112–1119. Jianfeng Wen, Jianxin Li, Yongyi Mao, Shini Chen, and Richong Zhang. 2016. On the representation and embedding of knowledge bases beyond binary relations. In Proceedings of the 25th International Joint Conference on Artificial Intelligence, pages 1300–1307. Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016. TransG: A generative model for knowledge graph embedding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 2316–2325. Richong Zhang, Junpeng Li, Jiajie Mei, and Yongyi Mao. 2018. Scalable instance reconstruction in knowledge bases via relatedness affiliated embedding. In Proceedings of the 27th International Conference on World Wide Web, pages 1185–1194.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6152–6158 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6152 Neural Graph Matching Networks for Chinese Short Text Matching Lu Chen, Yanbin Zhao, Boer Lv, Lesheng Jin, Zhi Chen, Su Zhu, Kai Yu∗ MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University SpeechLab, Department of Computer Science and Engineering Shanghai Jiao Tong University, Shanghai, China {chenlusz, zhaoyb, boerlv, King18817550378}@sjtu.edu.cn {zhenchi713, paul2204, kai.yu}@sjtu.edu.cn Abstract Chinese short text matching usually employs word sequences rather than character sequences to get better performance. However, Chinese word segmentation can be erroneous, ambiguous or inconsistent, which consequently hurts the final matching performance. To address this problem, we propose neural graph matching networks, a novel sentence matching framework capable of dealing with multi-granular input information. Instead of a character sequence or a single word sequence, paired word lattices formed from multiple word segmentation hypotheses are used as input and the model learns a graph representation according to an attentive graph matching mechanism. Experiments on two Chinese datasets show that our models outperform the state-of-the-art short text matching models. 1 Introduction Short text matching (STM) is a fundamental task of natural language processing (NLP). It is usually recognized as a paraphrase identification task or a sentence semantic matching task. Given a pair of sentences, a matching model is to predict their semantic similarity. It is widely used in question answer systems and dialogue systems (Gao et al., 2019; Yu et al., 2014). The recent years have seen advances in deep learning methods for text matching (Mueller and Thyagarajan, 2016; Gong et al., 2017; Chen et al., 2017; Lan and Xu, 2018). However, almost all of these models are initially proposed for English text matching. Applying them for Chinese text matching, we have two choices. One is to take Chinese characters as the input of models. Another is first to segment each sentence into words, and then to take these words as input tokens. Although character-based models can overcome the ∗Kai Yu is the corresponding author. Figure 1: An example of the word segmentation and the corresponding word lattice problem of data sparsity to some degree (Li et al., 2019), the main drawback of these models is that explicit word information is not fully exploited, which can be potentially useful for semantic matching. However, word-based models often suffer some potential issues caused by word segmentation. As shown in Figure 1, the character sequence “南京市长江大桥(South Capital City Long River Big Bridge)” has two different meanings with different word segmentation. The first one refers to a bridge (Segment-1, Segment-2), and the other refers to a person (Segment-3). The ambiguity may be eliminated with more context. Additionally, the segmentation granularity of different tools is different. For example, “长江大 桥(Yangtze River Bridge)” in Segment-1 is divided into two words “长江(Yangtze River)” and “大 桥(Bridge)” in Segment-2. It has been shown that multi-granularity information is important for text matching (Lai et al., 2019). Here we propose a neural graph matching method (GMN) for Chinese short text matching. 
Instead of segmenting each sentence into a word sequence, we keep all possible segmentation paths to form a word lattice graph, as shown in Figure 1. GMN takes a pair of word lattice graphs as input and updates the representations of nodes according to the graph matching attention mechanism. Also, 6153 GMN can be combined with pre-trained language models, e.g. BERT (Devlin et al., 2019). It can be regarded as a method to integrate word information in these pre-trained language models during the fine-tuning phase. The experiments on two Chinese Datasets show that our model outperforms not only previous state-of-the-art models but also the pre-trained model BERT as well as some variants of BERT. 2 Problem Statement First, we define the Chinese short text matching task in a formal way. Given two Chinese sentences Sa = {ca 1, ca 2, · · · , ca ta} and Sb = {cb 1, cb 2, · · · , cb tb}, the goal of a text matching model f(Sa, Sb) is to predict whether the semantic meaning of Sa and Sb is equal. Here, ca i and cb j represent the i-th and j-th Chinese character in the sentences respectively, and ta and tb denote the number of characters in the sentences. In this paper, we propose a graph-based matching model. Instead of segmenting each sentence into a word sequence, we keep all possible segmentation paths to form a word lattice graph G = (V, E). V is the set of nodes and includes all character subsequences that match words in a lexicon D. E is the set of edges. If a node vi ∈V is adjacent to another node vj ∈V in the original sentence, then there is an edge eij between them. Nfw(vi) denotes the set of all reachable nodes of node vi in its forward direction, while Nbw(vi) denotes the set of all reachable nodes of node vi in its backward direction. With two graphs Ga = (Va, Ea) and Gb = (Vb, Eb), our graph matching model is to predict their similarity, which indicates whether the original sentences Sa and Sb have the same meaning or not. 3 Proposed Framework As shown in Figure 2, our model consists of three components: a contextual node embedding module (BERT), a graph matching module, and a relation classifier. 3.1 Contextual Node Embedding For each node vi in graphs, its initial node embedding is the attentive pooling of contextual character representations. Concretely, we first concat the original character-level sentences to form a new sequence S = Figure 2: Overview of our proposed framework {[CLS], ca 1, · · · , ca ta, [SEP], cb 1, · · · , cb tb, [SEP]}, and then feed them to the BERT model to obtain the contextual representations for each charater cCLS, ca 1, · · · , ca ta, cSEP, cb 1, · · · , cb tb, cSEP. Assuming that the node vi consists of ni consecutive character tokens {csi, csi+1, · · · , csi+ni−1}1, a feature-wised score vector ˆusi+k is calculated with a feed forward network (FNN) with two layers for each character csi+k, i.e. ˆusi+k = FFN(csi+k), and then normalized with feature-wised multidimensional softmax. The corresponding character embedding csi+k is weighted with the normalised scores usi+k to obtain the initial node embedding vi = Pn−1 k=0 usi+k ⊙csi+k, where ⊙represents element-wise product of two vectors. 3.2 Neural Graph Matching Module Our proposed neural graph matching module is based on graph neural networks (GNNs) (Scarselli et al., 2009). 
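As a concrete illustration of the lattice construction in Section 2, the sketch below builds G = (V, E) from a character sequence and a lexicon. The toy lexicon is hypothetical (the paper uses the vocabulary of Song et al. (2018)), and keeping single characters as fallback nodes is an implementation choice that the paper does not spell out.

```python
from typing import List, Set, Tuple

def build_word_lattice(chars: str, lexicon: Set[str], max_word_len: int = 4):
    """Build the lattice G = (V, E) of Section 2: every character span that matches a
    lexicon word becomes a node; an edge links nodes that are adjacent in the sentence."""
    # Nodes are (start, end) character spans with `end` exclusive. Single characters
    # are kept as nodes so that every position stays reachable even without a lexicon match.
    nodes: List[Tuple[int, int]] = []
    for i in range(len(chars)):
        for j in range(i + 1, min(i + max_word_len, len(chars)) + 1):
            if j == i + 1 or chars[i:j] in lexicon:
                nodes.append((i, j))
    # An edge u -> v exists whenever v starts exactly where u ends.
    edges = [(u, v) for u in nodes for v in nodes if u[1] == v[0]]
    return nodes, edges

def forward_reachable(node: Tuple[int, int], edges) -> Set[Tuple[int, int]]:
    """N_fw(v): all nodes reachable from `node` by following edges forward."""
    frontier, seen = [node], set()
    while frontier:
        u = frontier.pop()
        for a, b in edges:
            if a == u and b not in seen:
                seen.add(b)
                frontier.append(b)
    return seen

# Toy example with a hypothetical lexicon.
lexicon = {"南京", "南京市", "市长", "长江", "大桥", "长江大桥"}
nodes, edges = build_word_lattice("南京市长江大桥", lexicon)
```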
GNNs are widely applied in various NLP tasks, such as text classification (Yao et al., 2019), machine translation (Marcheggiani et al., 2018), Chinese word segmentation (Yang et al., 2019), Chinese named entity recognition (Zhang 1Here si denotes the index of the first character of vi in the sentence Sa or Sb. For brevity, the superscript of csi+k is omitted. 6154 and Yang, 2018), dialogue policy optimization (Chen et al., 2018c, 2019, 2018b), and dialogue state tracking (Chen et al., 2020; Zhu et al., 2020), etc. To the best of our knowledge, we are the first to introduce GNN in Chinese shot text matching. The neural graph matching module takes the contextual node embedding vi as the initial representation h0 i for the node vi, then updates its representation from one step (or layer) to the next with two sub-steps: message propagation and representation updating. Without loss of generality, we will use nodes in Ga to describe the update process of node representations, and the update process for nodes in Gb is similar. Message Propagation At l-th step, each node vi in Ga will not only aggregate messages mfw i and mbw i from its reachable nodes in two directions: mfw i = X vj∈Nfw(vi) αij  Wfwhl−1 j  , mbw i = X vk∈Nbw(vi) αik  Wbwhl−1 k  , (1) but also aggregate messages mb1 i and mb2 i from all nodes in graph Gb, mb1 i = X vm∈Vb αim  Wfwhl−1 m  , mb2 i = X vq∈Vb αiq  Wbwhl−1 q  . (2) Here αij, αik, αim and αiq are attention coefficients (Vaswani et al., 2017). The parameters Wfw and Wbw as well as the parameters for attention coefficients are shared in Eq. (1) and Eq. (2). We define mself i ≜[mfw i , mbw i ] and mcross i ≜[mb1 i , mb2 i ]. With this sharing mechanism, the model has a nice property that, when the two graphs are perfectly matched, we have mself i ≈mcross i . The reason why they are not exactly equal is that the node vi can only aggregate messages from its reachable nodes in graph Ga, while it can aggregate messages from all nodes in Gb. Representation Updating After aggregating messages, each node vi will update its representation from hl−1 i to hl i. Here we first compare two messages mself i and mcross i with multi-perspective cosine distance (Wang et al., 2017), dk = cosine  wcos k ⊙mself i , wcos k ⊙mcross i  , (3) Dataset Size Pos:Neg Domain BQ 120,000 1:1 bank LCQMC 260,068 1.3:1 open-domain Table 1: Features of two datasets BQ and LCQMC where k ∈{1, 2, · · · , P} (P is number of perspectives). wcos k is a parameter vector, which assigns different weights to different dimensions of messages. With P distances d1, d2, · · · , dP , we update the representation of vi, hl i = FFN h mself i , di i , (4) where [·, ·] denotes the concatation of two vectors, di ≜[d1, d2, · · · , dP ]. FFN is a feed forward network with two layers. After updating node representation L steps, we will obtain the graph-aware representation hL i for each node vi. hL i includes not only the information from its reachable nodes but also information of pairwise comparison with all nodes in another graph. The graph level representations ga and gb for two graphs Ga and Gb are computed by attentive pooling of representations of all nodes in each graph. 3.3 Relation Classifier With two graph level representations ga and gb, we can predict the similarity of two graphs or sentences, p = FFN h ga, gb, ga ⊙gb, |ga −gb| i , (5) where p ∈[0, 1]. During the training phase, the training object is to minimize the binary crossentropy loss. 
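The matching step of Eqs. (3)-(4) and the classifier of Eq. (5) can be sketched as follows in schematic PyTorch. The message vectors of Eqs. (1)-(2) are assumed to be given, the dimensions follow the hyper-parameters reported in Section 4.1 (node dimension 128, P = 20), and the sigmoid on the classifier output is an assumption for mapping the score into [0, 1]; this is an illustration, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphMatchUpdate(nn.Module):
    """Sketch of one GMN update step (Eqs. 3-4). `m_self` and `m_cross` are the
    concatenated forward/backward messages aggregated within and across graphs."""
    def __init__(self, dim: int = 128, num_perspectives: int = 20):
        super().__init__()
        self.w_cos = nn.Parameter(torch.randn(num_perspectives, 2 * dim))  # one weight vector per perspective
        self.update = nn.Sequential(nn.Linear(2 * dim + num_perspectives, dim),
                                    nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, m_self, m_cross):
        # m_self, m_cross: (num_nodes, 2*dim)
        a = self.w_cos.unsqueeze(0) * m_self.unsqueeze(1)    # (nodes, P, 2*dim)
        b = self.w_cos.unsqueeze(0) * m_cross.unsqueeze(1)
        d = F.cosine_similarity(a, b, dim=-1)                # multi-perspective comparison, Eq. (3)
        return self.update(torch.cat([m_self, d], dim=-1))   # new node representations, Eq. (4)

def classify(g_a, g_b, ffn: nn.Module):
    """Relation classifier of Eq. (5) on the two pooled graph representations."""
    feats = torch.cat([g_a, g_b, g_a * g_b, (g_a - g_b).abs()], dim=-1)
    return torch.sigmoid(ffn(feats))  # probability that the two sentences match

# Example classifier head operating on graph representations of dimension 128:
clf = nn.Sequential(nn.Linear(4 * 128, 64), nn.ReLU(), nn.Linear(64, 1))
```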
4 Experiments
4.1 Experimental Setup
Dataset We conduct experiments on two Chinese datasets for semantic textual similarity: LCQMC (Liu et al., 2018) and BQ (Chen et al., 2018a). LCQMC is a large-scale open-domain corpus for question matching, while BQ is a domain-specific corpus for bank question matching. Each sample in both datasets contains a pair of sentences and a binary label indicating whether the two sentences have the same meaning or share the same intention. All features of the two datasets are summarized in Table 1. For each dataset, the accuracy (ACC) and F1 score are used as the evaluation metrics.
Table 2: Performance of various models on LCQMC and BQ test datasets (ACC / F1; only ACC is available for BERT-wwm, BERT-wwm-ext and ERNIE).
Text-CNN: BQ 68.5 / 69.2, LCQMC 72.8 / 75.7
BiLSTM: BQ 73.5 / 72.7, LCQMC 76.1 / 78.9
Lattice-CNN: BQ 78.2 / 78.3, LCQMC 82.1 / 82.4
BiMPM: BQ 81.9 / 81.7, LCQMC 83.3 / 84.9
ESIM-char: BQ 79.2 / 79.3, LCQMC 82.0 / 84.0
ESIM-word: BQ 81.9 / 81.9, LCQMC 82.6 / 84.5
GMN (Ours): BQ 84.2 / 84.1, LCQMC 84.6 / 86.0
BERT: BQ 84.5 / 84.0, LCQMC 85.7 / 86.8
BERT-wwm: BQ 84.9, LCQMC 86.8
BERT-wwm-ext: BQ 84.8, LCQMC 86.6
ERNIE: BQ 84.6, LCQMC 87.0
GMN-BERT (Ours): BQ 85.6 / 85.5, LCQMC 87.3 / 88.0
Hyper-parameters The number of graph updating steps/layers L is 2 on both datasets. The dimension of node representations is 128. The dropout rate for all hidden layers is 0.2. The number of matching perspectives P is 20. Each model is trained with RMSProp with an initial learning rate of 0.0001 and a batch size of 32. We use the vocabulary provided by Song et al. (2018) to build the lattice.
4.2 Main Results
We compare our models with two types of baselines: basic neural models without pre-training and BERT-based models pre-trained on large-scale corpora. The basic neural approaches can be further divided into two groups: representation-based models and interaction-based models. The representation-based models calculate the sentence representations independently and use the distance as the similarity score. Such models include Text-CNN (Kim, 2014), BiLSTM (Graves and Schmidhuber, 2005) and Lattice-CNN (Lai et al., 2019). Note that Lattice-CNN also takes word lattices as input. The interaction-based models consider the interaction between two sentences when calculating sentence representations; they include BiMPM (Wang et al., 2017) and ESIM (Chen et al., 2017). ESIM has achieved state-of-the-art results on various matching tasks (Bowman et al., 2015; Chen and Wang, 2019; Williams et al., 2018). For pre-trained models, we consider BERT and several of its variants, such as BERT-wwm (Cui et al., 2019), BERT-wwm-ext (Cui et al., 2019) and ERNIE (Sun et al., 2019; Cui et al., 2019). One common feature of these variants of BERT is that they all use word information during the pre-training phase. We use GMN-BERT to denote our proposed model. We also employ a character-level transformer encoder instead of BERT as the contextual node embedding module described in Section 3.1, which is denoted as GMN.
Figure 3: Performance (ACC) of GMN with different inputs on LCQMC dataset (PKU 84.20, JIEBA 84.32, JIEBA+PKU 84.59, LATTICE 84.60).
The comparison results are reported in Table 2. From the first part of the results, we can find that our GMN performs better than the five baselines on both datasets. Also, the interaction-based models in general outperform the representation-based models. Although Lattice-CNN 2 also utilizes word lattices, it has no node-level comparison due to the limits of its structure, which causes significant performance degradation.
As for the interaction-based models, although they both use the multi-perspective matching mechanism, GMN outperforms BiMPM and ESIM (char and word) 3, which indicates that the utilization of word lattices with our neural graph matching networks is powerful. From the second part of Table 2, we can find that the three variants of BERT (BERT-wwm, BERT-wwm-ext, ERNIE) 4 all outperform the original BERT, which indicates that using word-level information during pre-training is important for Chinese matching tasks. Our model GMN-BERT performs better than all these BERT-based models. This shows that utilizing word information during the fine-tuning phase with GMN is an effective way to boost the performance of BERT for Chinese semantic matching.
2 The results of Lattice-CNN are produced by the open-source code https://github.com/Erutan-pku/LCN-for-ChineseQA.
3 The results of ESIM are produced by the open-source code https://github.com/lanwuwei/SPM toolkit.
4 The results of BERT-wwm, BERT-wwm-ext and ERNIE are taken from the paper (Cui et al., 2019).
Figure 4: Examples of different predictions by Jieba and Lattice.
4.3 Analysis
In this section, we investigate the effect of word segmentation on our model GMN. A word sequence can be regarded as a thin graph; therefore, it can be used to replace the word lattice as the input of GMN. As shown in Figure 3, we compare four models: Lattice is our GMN with the word lattice as input. PKU and JIEBA are similar to Lattice except that their input is a word sequence produced by the word segmentation tools pkuseg (Luo et al., 2019) and Jieba 5, respectively, while the input of JIEBA+PKU is a small lattice graph generated by merging the two word segmentation results. We can find that the lattice-based models (Lattice and JIEBA+PKU) perform much better than the word-based models (PKU and JIEBA). We can also find that the performance of PKU+JIEBA is very close to the performance of Lattice. The union of different word segmentation results can be regarded as a tiny lattice, which is usually a sub-graph of the overall lattice. Compared with the tiny graph, the overall lattice has more noisy nodes (i.e., invalid words in the corresponding sentence). Therefore, we think it is reasonable that the performance of the tiny lattice (PKU+JIEBA) is comparable to the performance of the overall lattice (Lattice). On the other hand, this indicates that our model has the ability to deal with the noisy information introduced in the lattice graph. In Figure 4, we give two examples to show that word segmentation errors result in incorrect predictions by JIEBA, while Lattice gives the right answers.
5 https://github.com/fxsjy/jieba
5 Conclusion
In this paper, we propose a neural graph matching model for Chinese short text matching. It takes a pair of word lattices as input instead of word or character sequences. The utilization of word lattices can provide more multi-granularity information and avoid the error propagation issue of word segmentation. Additionally, our model and the pre-training model are complementary. It can be regarded as a flexible method to introduce word information into BERT during the fine-tuning phase. The experimental results show that our model outperforms the state-of-the-art text matching models as well as some BERT-based models.
Acknowledgments
This work has been supported by the National Key Research and Development Program of China (Grant No. 2017YFB1002102) and Shanghai Jiao Tong University Scientific and Technological Innovation Funds (YG2020YQ01).
References
Samuel R.
Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Jing Chen, Qingcai Chen, Xin Liu, Haijun Yang, Daohe Lu, and Buzhou Tang. 2018a. The bq corpus: A large-scale domain-specific chinese corpus for sentence semantic equivalence identification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4946–4951. Lu Chen, Cheng Chang, Zhi Chen, Bowen Tan, Milica Gaˇsi´c, and Kai Yu. 2018b. Policy adaptation for deep reinforcement learning-based dialogue management. In Proceedings of IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), pages 6074–6078. IEEE. Lu Chen, Zhi Chen, Bowen Tan, Sishan Long, Milica Gasic, and Kai Yu. 2019. Agentgraph: Towards universal dialogue management with structured deep reinforcement learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27(9):1378–1391. Lu Chen, Boer Lv, Chi Wang, Su Zhu, Bowen Tan, and Kai Yu. 2020. Schema-guided multi-domain dialogue state tracking with graph attention neural networks. In The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI). Lu Chen, Bowen Tan, Sishan Long, and Kai Yu. 2018c. Structured dialogue policy with graph neural networks. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 1257–1268. Qian Chen and Wen Wang. 2019. Sequential matching model for end-to-end multi-turn response selection. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7350–7354. IEEE. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1657–1668. Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for chinese bert. arXiv preprint arXiv:1906.08101. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Jianfeng Gao, Michel Galley, Lihong Li, et al. 2019. Neural approaches to conversational ai. Foundations and Trends R⃝in Information Retrieval, 13(23):127–298. Yichen Gong, Heng Luo, and Jian Zhang. 2017. Natural language inference over interaction space. arXiv preprint arXiv:1709.04348. Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural networks, 18(5-6):602–610. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751. Yuxuan Lai, Yansong Feng, Xiaohan Yu, Zheng Wang, Kun Xu, and Dongyan Zhao. 2019. Lattice cnns for matching based chinese question answering. arXiv preprint arXiv:1902.09087. Wuwei Lan and Wei Xu. 2018. 
Neural network models for paraphrase identification, semantic textual similarity, natural language inference, and question answering. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3890–3902. Xiaoya Li, Yuxian Meng, Xiaofei Sun, Qinghong Han, Arianna Yuan, and Jiwei Li. 2019. Is word segmentation necessary for deep learning of chinese representations? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3242–3252. Xin Liu, Qingcai Chen, Chong Deng, Huajun Zeng, Jing Chen, Dongfang Li, and Buzhou Tang. 2018. Lcqmc: A large-scale chinese question matching corpus. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1952–1962. Ruixuan Luo, Jingjing Xu, Yi Zhang, Xuancheng Ren, and Xu Sun. 2019. Pkuseg: A toolkit for multi-domain chinese word segmentation. CoRR, abs/1906.11455. Diego Marcheggiani, Joost Bastings, and Ivan Titov. 2018. Exploiting semantics in neural machine translation with graph convolutional networks. arXiv preprint arXiv:1804.08313. Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similarity. In Thirtieth AAAI Conference on Artificial Intelligence. 6158 Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. 2009. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80. Yan Song, Shuming Shi, Jing Li, and Haisong Zhang. 2018. Directional skip-gram: Explicitly distinguishing left and right context for word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 175–180. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 4144–4150. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Jie Yang, Yue Zhang, and Shuailong Liang. 2019. Subword encoding in lattice lstm for chinese word segmentation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2720–2725. Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In Proceedings of AAAI, pages 7370–7377. Kai Yu, Lu Chen, Bo Chen, Kai Sun, and Su Zhu. 2014. Cognitive technology in task-oriented dialogue systems: Concepts, advances and future. Chinese Journal of Computers, 37(18):1–17. Yue Zhang and Jie Yang. 2018. Chinese ner using lattice lstm. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554–1564. Su Zhu, Jieyu Li, Lu Chen, and Kai Yu. 2020. Efficient context and schema fusion networks for multidomain dialogue state tracking. arXiv preprint arXiv:2004.03386.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6159–6169 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6159 Neural Mixed Counting Models for Dispersed Topic Discovery Jiemin Wu1,∗, Yanghui Rao1,†, Zusheng Zhang1, Haoran Xie2, Qing Li3, Fu Lee Wang4, Ziye Chen1 1School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China 2Department of Computing and Decision Sciences, Lingnan University, Hong Kong 3Department of Computing, The Hong Kong Polytechnic University, Hong Kong 4School of Science and Technology, The Open University of Hong Kong, Hong Kong [email protected], [email protected], [email protected], [email protected], [email protected], {zhangzsh3, chenzy35}@mail2.sysu.edu.cn Abstract Mixed counting models that use the negative binomial distribution as the prior can well model over-dispersed and hierarchically dependent random variables; thus they have attracted much attention in mining dispersed document topics. However, the existing parameter inference method like Monte Carlo sampling is quite time-consuming. In this paper, we propose two efficient neural mixed counting models, i.e., the Negative BinomialNeural Topic Model (NB-NTM) and the Gamma Negative Binomial-Neural Topic Model (GNB-NTM) for dispersed topic discovery. Neural variational inference algorithms are developed to infer model parameters by using the reparameterization of Gamma distribution and the Gaussian approximation of Poisson distribution. Experiments on real-world datasets indicate that our models outperform state-of-theart baseline models in terms of perplexity and topic coherence. The results also validate that both NB-NTM and GNB-NTM can produce explainable intermediate variables by generating dispersed proportions of document topics. 1 Introduction Mixture modeling is an essential topic in statistics and machine learning areas, owing to generating the random probability measure of data samples belonging to multiple clusters. In unsupervised learning tasks such as topic discovery, mixture modeling has gained increasing attention from researchers (Wang et al., 2011; Zhou and Carin, 2012, 2015; Zhou, 2018; Zhao et al., 2019). Specifically, mixture modeling over document words devotes to assign these words to different topics via random probability measures. Hierarchical Dirichlet ∗The first two authors contributed equally to this work which was finished when Jiemin Wu was an undergraduate student of his final year. †The corresponding author. Process (HDP) (Teh et al., 2004) is one of the representative methods in mixture modeling, which can characterize the two-level dependency of random probability measures. Although we can use Monte Carlo sampling or variational inference to estimate the parameters in HDP, it requires the help of indirect construction of random variables such as the Chinese Restaurant Franchise (Teh et al., 2004) or the Stick-Breaking construction (Wang et al., 2011) due to the lack of conjugation between the two-tier Dirichlet processes. This makes the inference of HDP mostly complicated (Zhou et al., 2016). The mixed counting models represented by the Negative Binomial (NB) process (Titsias, 2007) and the Gamma Negative Binomial (GNB) process (Zhou and Carin, 2012) have solved this problem to a certain extent, in which, the normalized GNB process has been proven to be equivalent to HDP (Zhou and Carin, 2012). 
Because both NB and GNB processes satisfy the properties of completely random measures (Charles Kingman, 1967), the generative process of random probability measures among various mixed components is independent and becomes straightforward. Moreover, they naturally introduce non-negative constraints and have been proven as able to model over-dispersed data. In the case of mining latent topics of documents, the over-dispersed property indicates that the variance is larger than the mean for document-topic distributions. When compared to the NB process, the GNB process has an extra feature of describing more flexible stochastic phenomena with hierarchical dependencies. Despite the above advantages, with the increase of data size and observable information, the aforementioned parameter inference method like Monte Carlo sampling or variational inference has gradually become an important factor limiting the usage scenarios of mixed counting models (Miao et al., 2016). The reason is that Monte Carlo sampling has a high computational 6160 cost, and variational inference becomes intractable when applied to models with complex variable dependencies (Acharya et al., 2015). Neural variational inference (NVI) is a flexible and fast parameter inference framework based on neural networks (Mnih and Gregor, 2014). It can be regarded as a generalization of variational autoencoder applicable to natural language processing tasks. Based on NVI, several neural topic models had been proposed and achieved encouraging performance in document modeling (Miao et al., 2016; Srivastava and Sutton, 2017; Miao et al., 2017). These models used the neural network to learn the distribution relationship between input documents and latent topics due to its excellent function fitting ability and scalability. Particularly, the neural network parameters can be trained by back-propagation through the reparameterization of a continuous distribution (Naesseth et al., 2017) or using variance reduction techniques for a discrete distribution (Mnih and Gregor, 2014). However, the hidden variables in the above neural topic models lack good interpretability, and it is also impossible to model over-dispersed and hierarchically dependent document sets for these methods. In this paper, we propose two novel neural mixed counting models dubbed the Negative BinomialNeural Topic Model (NB-NTM) and the Gamma Negative Binomial-Neural Topic Model (GNBNTM) based on NB and GNB processes, respectively. The general motivation is to combine the advantages of NVI and mixed counting models. On the one hand, NVI-based models are fast and easy to estimate but hard to interpret. On the other hand, document modeling via mixed counting models is easy to interpret but difficult to infer. In our NB-NTM and GNB-NTM, we develop NVI algorithms to infer parameters by using the reparameterization of Gamma distribution and the Gaussian approximation of Poisson distribution. Extensive experiments on real-world datasets validate the effectiveness of our proposed models in perplexity, topic coherence, and dispersed topic learning. Furthermore, the proposed models can describe the hierarchical dependence of random probability measures and introduce non-negative constraints, which renders the intermediate variables generated by our methods to have good interpretability. The remainder of this article is organized as follows. In Section 2, we summarize the related studies on topic discovery. In Section 3, we introduce the definitions and properties of background methods. 
The proposed models are described in Section 4, the experimental evaluations are shown in Section 5, and we draw the conclusions in Section 6. 2 Related Work Topic discovery aims to use the statistical information of word occurrences to obtain the abstract semantic structure embedded in a document set. From Bayesian methods represented by latent semantic analysis (LSA) (Deerwester et al., 1990), probabilistic latent semantic analysis (PLSA) (Hofmann, 1999), latent Dirichlet allocation (LDA) (Blei et al., 2003), and Hierarchical Dirichlet Process (HDP) (Teh et al., 2004), topic discovery had been widely researched in natural language processing and applied to many scenarios. For instance, the above models were extended to capture topic relevance (Blei and Lafferty, 2005) and topic evolution over time (Wang and McCallum, 2006; Blei and Lafferty, 2006). Algorithms for short text (Yan et al., 2013), tagged data (Ramage et al., 2009), and stream data (Yao et al., 2009) were also proposed. Considering the importance of prior distributions in LDA-based models, some research efforts tried to use beta and Gaussian distributions instead of the Dirichlet distribution as the prior of probabilistic graphical models (Thibaux and Jordan, 2007; Das et al., 2015). Although the Bayesian method is a natural way to represent the latent structure of a document set in topic discovery, as the structure of such a model becomes deeper and more complex, pure Bayesian inference becomes intractable due to the high dimensional integrals required (Miao et al., 2016). To address this issue, Cheng and Liu (2014) proposed a parallel Monte Carlo sampling method for HDP based on multi-threading. Unfortunately, it needs to traverse every word of all topics (i.e., threads) in the whole corpus when updating the topic-word distribution, rendering a large time cost for thread communication. With the development of deep learning, especially the introduction of NVI, there is a new direction to discover topics based on neural networks. For example, Miao et al. (2016) assumed that word distributions in each document could be represented by hidden variables sampled from multiple Gaussian distributions, and they used the variational lower bound as the objective function of their model named NVDM. Srivastava and Sutton (2017) employed the logical Gaussian distribution to approxi6161 mate the Dirichlet distribution, which improved the variational auto-encoder and LDA simultaneously. Miao et al. (2017) proposed a method named GSM to model the document-topic distribution explicitly. In their study, the topic-word distribution was introduced into the decoder. Besides the above NVIbased methods, Nalisnick and Smyth (2017) developed a stick-breaking variational auto-encoder for image generation. Nan et al. (2019) proposed a model named W-LDA in the Wasserstein autoencoder framework. They employed the Maximum Mean Discrepancy (MMD) in W-LDA to match the proposed distribution and the prior distribution. However, the accuracy of MMD relied heavily on the number of samples for each distribution, and the kernel function in MMD had a significant influence on the performance. By leveraging word embeddings, Gupta et al. (2019) proposed a neural autoregressive topic model dubbed iDocNADE to enrich the context of short text. Experiments indicate that iDocNADE outperformed state-of-the-art generative topic models. 
The recent relevant work to ours is the method proposed in (Zhao et al., 2019), which regarded the NB distribution as the prior in modeling the over-dispersed discrete data. However, the parameters of this method were still derived from the latent variables that obey the Gaussian distribution. Thus, these latent variables do not satisfy the non-negative constraint and lack good interpretability. Furthermore, the above method did not model topics explicitly, making it hard to generate document-topic and topic-word distributions. 3 Background 3.1 Negative Binomial Process Let X ∼NBP(G0, p) denote a NB process defined on the product space R+ × Ω, where G0 is a finite continuous basic measure on a completely separable measure space Ω, and p is a scale parameter. For each Borel set A ⊂Ω, we use X(A) to denote a count random variable describing the number of observations that reside within A. Then, X(A) obeys the NB distribution NB(G0(A), p). Given the kth component πk and its weight rk on Ω, if G0 is expressed as G0 = P∞ k=1 rkδπk, where δ is the Dirac delta function, then X ∼NBP(G0, p) can be expressed by X = P∞ k=1 nkδπk, where nk ∼NB(rk, p). The NB distribution m ∼NB(r, p) has a probability density function fM(m) = Γ(r+m) m!Γ(r) (1 − p)rpm, where Γ(·) denotes the gamma function. For the above probability density function, the mean and the variance are µ = r/(1 −p) and σ2 = rp/(1 −p)2 = µ + r−1µ2, respectively. Because the mean is smaller than the variance, i.e., the variance-to-mean ratio is greater than 1, NB distributions have shown great advantages in overdispersed data modeling (Zhou and Carin, 2012). Moreover, since the NB distribution m ∼NB(r, p) can be extended to a Gamma distribution and a Poisson distribution, i.e., m ∼Poisson(λ) and λ ∼Gamma(r, p/(1 −p), the NB process mentioned earlier can be extended to a Gamma-Poisson process (Zhou and Carin, 2015) as follows: X ∼ PP (Λ), and Λ ∼GaP (G0, (1 −p) /p), where PP(·) and GaP(·) denote the Poisson process and the Gamma process, respectively. The random probability measure corresponding to each mixed component in the NB process can be directly sampled from the NB distribution without resorting to the Chinese Restaurant Franchise, the StickBreaking, or other construction methods, because each random measure is independent of the others, i.e., the NB process is completely random. 3.2 Gamma Negative Binomial Process In the NB process, each Poisson process shares the same Gamma process prior with a fixed mean. Based on the NB process, the GNB process assigns another Gamma process as a prior to its mean, making it easier to model over-dispersed data (Zhou et al., 2016). Particularly, the generative process of random variables for the GNB process is as follows: G ∼GaP (G0, η), Λj ∼GaP (G, (1 −pj) /pj), and Xj ∼PP (Λj), where j is the subset index, η is the scale parameter of the first-level Gamma process, and the basic measure G0 in the NB process is replaced by another random measure G. It has been shown that HDP is a normalized form of the GNB process in (Zhou and Carin, 2012). However, unlike HDP, the GNB process explicitly introduces the parameter pj to control the dispersion degree of instantaneous measurement, making the latter model more flexible. 3.3 Neural Variational Inference NVI is often used as an efficient parameter inference framework for complex and deep-seated structural models. 
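Before describing NVI further, the over-dispersion property and the Gamma-Poisson construction of Sections 3.1 and 3.2 can be checked numerically. The following short sketch is ours (not part of the original work); the values of r and p are arbitrary illustrations, and note that NumPy parameterizes the negative binomial by the success probability, so 1 - p is passed.

```python
import numpy as np

rng = np.random.default_rng(0)
r, p = 2.0, 0.6  # illustrative values only

# Direct NB samples. NumPy's negative_binomial(n, q) counts failures with
# success probability q, which matches NB(r, p) above when q = 1 - p.
nb_direct = rng.negative_binomial(r, 1.0 - p, size=200_000)

# The same distribution built as a Gamma-Poisson mixture, as in Section 3.1:
# lambda ~ Gamma(shape=r, scale=p / (1 - p)), m ~ Poisson(lambda).
lam = rng.gamma(shape=r, scale=p / (1.0 - p), size=200_000)
nb_mixed = rng.poisson(lam)

for name, x in (("direct NB", nb_direct), ("Gamma-Poisson", nb_mixed)):
    print(f"{name}: mean={x.mean():.2f}, var={x.var():.2f}")
# Both should be close to mean = r*p/(1-p) = 3.0 and var = r*p/(1-p)^2 = 7.5,
# i.e., the variance-to-mean ratio exceeds 1 (over-dispersion).
```

Both estimates should agree up to sampling noise, which is exactly the complete randomness of the NB process that lets each component be sampled independently.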
Inspired by the variational autoencoder, NVI assumes that the observed data d is subject to a certain probability distribution determined by a hidden variable h. In contrast to 6162 variational auto-encoders on handling the case of continuous latent variables (Kingma and Welling, 2014), NVI can deal with both discrete and continuous latent variables. Specifically, a neural network is used to infer the proposed distribution q(h|d). As stated in (Miao et al., 2017), Monte Carlo estimates of the gradient must be employed for models with discrete latent variables. In the case of q(h|d) being continuous, the hidden variable h is firstly obtained by sampling from q(h|d) through the corresponding reparameterization approach. Then, the likelihood p(d|h) is used to reproduce the observed data from hidden variables, and the objective is to minimize the KullbackLeibler (KL) divergence of the proposed distribution and the actual posterior distribution. Finally, the variational lower bound is obtained by L = Eq(h|d) log p (d|h) −DKL [q(h|d)∥p(h)], where the first term is the expectation of the loglikelihood, and the second one is the KL divergence between the inferred distribution and a predefined prior. To sum up, NVI first uses a neural network to infer the proposed distribution q(h|d), and then maximizes the variational lower bound by backpropagation to fit the actual posterior distribution p(h|d). Such a framework learns the distribution of input data well, enabling it to combine with the traditional probability graphical models (e.g., LDA) and infer model parameters quickly (Srivastava and Sutton, 2017). However, how to effectively integrate the distributed dependencies in mixed counting models into the framework of variational inference is still quite a challenging problem. 4 Proposed Models In this section, we respectively detail our NB-NTM and GNB-NTM for dispersed topic discovery. 4.1 Negative Binomial-Neural Topic Model With a NB process prior, we propose the NB-NTM to model the counting of document words. Furthermore, a novel NVI framework is developed for parameter inference. Let D = {d1, ..., d|D|} be the input with |D| documents and each document d ∈RV be a bag-of-words representation, where V is the vocabulary size. Since it is impossible to draw all the countably infinite atoms of a Gamma process, we first employ the finite truncation strategy, in which, a number of topics K (i.e., the truncated level) is set manually (Nalisnick and Smyth, 2017; Zhou, 2018). Note that although K is fixed, if K is set to be large enough, not necessarily all topics would be used and hence a truncated model still preserves its nonparametric ability; whereas if K is set to be small, asymmetric priors on the topic weights are also maintained (Zhou, 2018). Then we can express the generative process of NB-NTM for document d as follows: r = f1(d), p = f2(d), (1) λ ∼Gamma (r, p/ (1 −p)) , (2) n ∼Poisson (λ) , (3) where f1(·) and f2(·) are two multilayer perceptrons (MLPs) applying to generate the variational parameters r and p. Specifically, r is the component weight of G, i.e., the topic measure at the corpus level, and G = PK k=1 rkδπk. λ represents the weights of topics at the document level, which can be used to estimate the topic measure on d by Λ = PK k=1 λkδπk. In the above, λk denotes the kth component of λ. Finally, n is the component weight of Π that represents a Poisson process at the word level, and Π = PK k=1 nkδπk. 
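To make the generative pass of Eqs. (1)-(3) concrete, the following minimal NumPy sketch (ours, not the released implementation) runs a single toy document through randomly initialized MLPs, samples lambda and n, and decodes a word distribution. The hidden size, the tanh/softplus/sigmoid activations, and the simplified softmax decoder are assumptions of the sketch; in the trained model the same pass is differentiated through the reparameterization described in Section 4.3.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, H = 2000, 50, 256        # vocabulary size, topic number, MLP hidden size (assumed)

def mlp(d, w_in, w_out):
    """One-hidden-layer perceptron; the tanh hidden activation is an assumption."""
    return np.tanh(d @ w_in) @ w_out

softplus = lambda x: np.log1p(np.exp(x))      # keeps the Gamma shape r positive
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))  # keeps the scale parameter p in (0, 1)

# Randomly initialized parameters of f1, f2 and of the decoder; in the actual
# model they are trained by maximizing the variational lower bound.
w1_in, w1_out = rng.normal(0, 0.05, (V, H)), rng.normal(0, 0.05, (H, K))
w2_in, w2_out = rng.normal(0, 0.05, (V, H)), rng.normal(0, 0.05, (H, K))
R, b = rng.normal(0, 0.05, (K, V)), np.zeros(V)

d = rng.integers(0, 3, size=V).astype(float)  # a toy bag-of-words document

# Generative pass of Eqs. (1)-(3): d -> (r, p) -> lambda -> n -> word distribution.
r = softplus(mlp(d, w1_in, w1_out))           # corpus-level topic weights
p = sigmoid(mlp(d, w2_in, w2_out))            # NB scale parameter
lam = rng.gamma(shape=r, scale=p / (1.0 - p)) # document-level topic weights
n = rng.poisson(lam)                          # word-level topic counts

logits = n @ R + b                            # decoder scores, one per vocabulary word
p_w = np.exp(logits - logits.max())
p_w /= p_w.sum()                              # simplified softmax over the vocabulary
theta = lam / lam.sum()                       # normalized document-topic distribution
print(theta[:5].round(3), p_w.argmax())
```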
The framework of NB-NTM is shown in Figure 1, and the parameter inference process is described as follows. Figure 1: Framework of NB-NTM. For the logarithmic likelihood of each document d, we can derive the variational lower bound by L = −DKL(q(λ|d)||p(λ)) + Eq(λ|d) hPNd i=1 log p(ωi|λ) i . In the above, q(λ|d) is the encoder’s inference of posterior probability, i.e., Gamma(r, p), ωi ∈RV is the one-hot representation of the word at the ith position, Nd is the number of words in document d, and p(λ) is the Gamma prior for λ, i.e., Gamma(ξ, c). The KL divergence between q(λ|d) and p(λ), i.e., Gamma(r, p) and Gamma(ξ, c), is calculated by following (Mathiassen et al., 2002): DKL(q(λ|d)||p(λ)) = PK k=1[(rk −1)Ψ(rk) − 6163 log pk−rk−log Γ(rk)−(ξ−1)(Ψ(rk)+log pk)+ log Γ(ξ)+ξ log c+ rkpk c ], where Ψ(·) is the Digamma function. The conditional probability over each word p(ωi|λ) is modeled by softmax function, as follows: p(ωi|λ) = exp{σ(nT Rωi+bi)} PV j=1 exp{σ(nT Rωj+bj)}, where R and b denote the weight matrix and the bias term, respectively. We present the parameter inference process of NB-NTM in Algorithm 1, in which, the variational lower bound L is used to calculate gradients and model parameters are updated by Adam (Kingma and Ba, 2015). Algorithm 1: Parameter Inference for NB-NTM Input: Number of topics K, gamma priors ξ and c, document set D; Output: Document-topic distribution θ, topic-word distribution φ. 1 repeat 2 for document d ∈D do 3 Compute gamma distribution parameters r = f1(d), p = f2(d); 4 Compute the KL divergence between Gamma(r, p) and Gamma(ξ, c); 5 for k ∈[1, K] do 6 Sample the Poisson distribution parameter by λk ∼Gamma(rk, pk/(1 −pk)); 7 Sample word numbers by nk ∼Poisson (λk); 8 end 9 for ωi ∈d do 10 Compute log-likelihood log p(ωi|λ); 11 end 12 Compute variational lower bound L; 13 Update f1(·), f2(·), R, and b; 14 end 15 until convergence; 16 for document d ∈D do 17 Normalize λ to obtain θd; 18 end 19 Apply softmax to R in row to obtain φ. 4.2 Gamma Negative Binomial-Neural Topic Model Based on the NB-NTM, we further propose the GNB-NTM by assigning another Gamma process as a prior to the NB process. As shown in Figure 2, the generative process of GNB-NTM for document d is given below: γ = f1(d), η = f2(d), (4) r ∼Gamma (γ, η) , (5) λ ∼Gamma (r, p/ (1 −p)) , (6) p = f3(d), n ∼Poisson (λ) . (7) In the above, γ and η are the parameters of the first-level Gamma process, and p is the scale parameter of the second-level Gamma process. The differences between GNB-NTM and NB-NTM are three-fold. Firstly, another Gamma process G0 is introduced over the existing Gamma process G as a prior of its shape parameter, so as to characterize the multi-level dependencies of random variables. In particular, G0 = PK k=1 γkδπk. Secondly, a scale parameter p is introduced for each document to describe the dispersion degree of all words in the document. Thirdly, the GNB-NTM employs n + r as the input of the decoder by following the production rule of the observed variable in (Zhou and Carin, 2012). Using n + r as the input also helps to incorporate the global topic information into the decoder’s inference of posterior probability q(r|d). Thus, the conditional probability over each word p(ωi|r) is modeled as follows: p(ωi|r) = exp{σ((n+r)T Rωi+bi)} PV j=1 exp{σ((n+r)T Rωj+bj)}. Figure 2: Framework of GNB-NTM, where both r and n are used as input in the decoder. 
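Since the closed-form Gamma-Gamma KL term enters the objectives of both models, a small self-check against a Monte Carlo estimate can help catch parameterization mistakes. The sketch below is ours: it assumes the shape-scale parameterization and uses arbitrary test values to verify the expression quoted from Mathiassen et al. (2002).

```python
import numpy as np
from scipy.special import digamma, gammaln

def kl_gamma(r, p, xi, c):
    """KL( Gamma(shape=r, scale=p) || Gamma(shape=xi, scale=c) ), summed over
    topics, following the closed form used in the paper (Mathiassen et al., 2002)."""
    return np.sum(
        (r - 1.0) * digamma(r) - np.log(p) - r - gammaln(r)
        - (xi - 1.0) * (digamma(r) + np.log(p))
        + gammaln(xi) + xi * np.log(c) + r * p / c
    )

# Sanity check against a Monte Carlo estimate for a single topic.
rng = np.random.default_rng(0)
r, p, xi, c = 2.5, 0.8, 1.0, 1.0          # arbitrary illustrative values
x = rng.gamma(shape=r, scale=p, size=500_000)
log_q = (r - 1) * np.log(x) - x / p - gammaln(r) - r * np.log(p)
log_prior = (xi - 1) * np.log(x) - x / c - gammaln(xi) - xi * np.log(c)
print(kl_gamma(np.array([r]), np.array([p]), xi, c), (log_q - log_prior).mean())
# The two numbers should agree up to Monte Carlo error.
```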
Similar to NB-NTM, the variational lower bound is derived by: L = Eq(r|d) hPNd i=1 log p(ωi|r) i − DKL(q(r|d)||p(r)), where p(r) is the Gamma prior for r, i.e., Gamma(ξ, c). The parameter inference for GNB-NTM is presented in Algorithm 2. We use the variational lower bound to calculate gradients and apply Adam to update parameters of GNB-NTM, which are the same as NB-NTM. 4.3 Reparameterization Approach The Gamma and Poisson sampling operation cannot be differentiated, making it intractable to up6164 Algorithm 2: Parameter Inference for GNBNTM Input: Number of topics K, gamma priors ξ and c, document set D; Output: Document-topic distribution θ, topic-word distribution φ. 1 repeat 2 for document d ∈D do 3 Compute the 1st gamma distribution parameters γ = f1(d), η = f2(d); 4 Compute the KL divergence between Gamma(γ, η) and Gamma(ξ, c); 5 Compute the 2nd gamma distribution parameter p = f3(d); 6 for k ∈[1, K] do 7 Sample the 2nd gamma distribution parameter by rk ∼Gamma(γk, ηk); 8 Sample the Poisson distribution parameter by λk ∼Gamma(rk, p/(1 −p)); 9 Sample word numbers by nk ∼Poisson (λk); 10 end 11 for ωi ∈d do 12 Compute log-likelihood log p(ωi|r); 13 end 14 Compute variational lower bound L ; 15 Update f1(·), f2(·), f3(·), R, and b; 16 end 17 until convergence; 18 for document d ∈D do 19 Normalize λ to obtain θd; 20 end 21 Apply softmax to R in row to obtain φ. date model parameters through back-propagation. Here, we describe the reparameterization approach for smoothing gradients. For the Gamma distribution x ∼Gamma(α, β) with α > 1, the reparameterization can be obtained by the rejectsampling method (Naesseth et al., 2017), i.e., x = 1 β α −1 3   1 + ε √9α−3 3 , ϵ ∼N(0, 1). Besides, the shape augmentation method (Naesseth et al., 2017) is applied to convert α ≤1 to α > 1 to increase the accept rate of each rejection sampler. For the Poisson distribution which is discrete, we use the Gaussian distribution as an approximation (Rezende et al., 2014; Kingma and Welling, 2014). Based on the central limit theorem, N(µ = λ, σ2 = λ) can approximate Poisson(λ). Thus, we sample from the Poisson distribution directly to avoid the issue of discretization and use the Gaussian distribution as an approximation when calculating the Poisson distribution’s gradient. Particularly, the reparameterization of a Gaussian distribution x ∼N(µ, σ2) is x = µ + ϵ · σ, ϵ ∼N(0, 1). 5 Empirical Results 5.1 Datasets We employ the following three datasets to evaluate the effectiveness of our models: Reuters1, 20News, and MXM song lyrics (Miao et al., 2017). The Reuters dataset contains 7,758 training documents and 3,005 testing documents. The 20News corpus consists of 18,773 news articles under 20 categories. These news articles are divided into 11,268 training documents and 7,505 testing documents. The 20 categories include sports, electronics, automotive, and so forth, and the number of documents under each category is almost the same. MXM is the official lyrics collection of the Million Song Dataset, which contains 210,519 training documents and 27,143 testing documents, respectively. By following (Miao et al., 2017), we use the originally provided vocabulary with 5,000 words for MXM, while for Reuters and 20News, we use stemming, stop words filtering, and the 2,000 most frequently occurred words as vocabularies. The statistics of these datasets are presented in Table 1. 
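Before turning to the datasets summarized in Table 1, the reparameterization of Section 4.3 can be illustrated with a short sketch. This is our illustration rather than the authors' code: it treats beta as a rate parameter (as the 1/beta factor in the quoted transform suggests), omits the rejection correction and shape augmentation of Naesseth et al. (2017), and clips the Gaussian surrogate of the Poisson at zero as an implementation choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_reparam(alpha, beta, eps):
    """Differentiable Gamma(alpha, rate=beta) sample via the transform quoted in
    Section 4.3 (valid for alpha > 1); the rejection correction and the shape
    augmentation for alpha <= 1 are omitted in this sketch."""
    return (alpha - 1.0 / 3.0) * (1.0 + eps / np.sqrt(9.0 * alpha - 3.0)) ** 3 / beta

def poisson_reparam(lam, eps):
    """Gaussian approximation N(lam, lam) of Poisson(lam) used for gradients;
    clipping at zero keeps the count non-negative (an implementation choice)."""
    return np.clip(lam + eps * np.sqrt(lam), 0.0, None)

alpha, beta = 5.0, 2.0
eps = rng.standard_normal(200_000)
x = gamma_reparam(alpha, beta, eps)
print(x.mean(), alpha / beta)          # both should be close to 2.5
n = poisson_reparam(x, rng.standard_normal(x.shape))
print(n.mean(), x.mean())              # the Gaussian surrogate should roughly preserve the mean
```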
Dataset Reuters 20News MXM Train.Size 7,758 11,268 210,519 Test.Size 3,005 7,505 27,143 Label number 90 20 Vocabulary size 2,000 2,000 5,000 Table 1: Statistics of the datasets. 5.2 Experimental Setup The following models are adopted as baselines: HDP (Teh et al., 2004), NVDM (Miao et al., 2016), NVLDA and ProdLDA (Srivastava and Sutton, 2017), GSM (Miao et al., 2017), and iDocNADE (Gupta et al., 2019). Among these baselines, HDP is a classical mixture modeling method followed 1https://www.nltk.org/book/ch02.html 6165 the equivalence with the normalized GNB process (Zhou and Carin, 2012). In HDP, the model parameters are estimated by Monte Carlo sampling. NVDM, NVLDA, ProdLDA, and GSM are all neural topic models based on NVI. Considering that word embeddings have shown to capture both the semantic and syntactic relatedness in words and demonstrated impressive performance in natural language processing tasks, we also present the result of a neural autoregressive topic model that leverages word embeddings (i.e., iDocNADE). Particularly, the publicly available codes of HDP2, NVDM3, NVLDA and ProdLDA4, and iDocNADE5 are directly used. As an extended model of NVDM, the baseline of GSM is implemented by us based on the code of NVDM. To ensure fair comparisons on various NVI-based methods, unless explicitly specified, we set the number of topics to 50, the hidden dimension of MLP to 256, and use one sample for NVI by following (Miao et al., 2017). For the batch size, the learning rate, and other model parameters, grid search is carried out on the training set to determine their optimal values and achieve the held-out performance. To evaluate the quality of topics generated by different models, we use perplexity and topic coherence as evaluation criteria. The perplexity of each model on a testing set eD is: perplexity ( eD) = exp  −1 | e D| P ed 1 Ned log p(ed)  , where log p(ed) represents the log-likelihood of the model on document ed, and Ned is the number of words in ed. The lower the perplexity is, the more likely for a model to generate eD. Therefore, if a model obtains a lower perplexity than others in the testing set, it can be considered as the better one. For all NVIbased topic models, the variational lower bound, which is proven to be the upper bound of perplexity (Mnih and Gregor, 2014), is used to calculate the perplexity by following (Miao et al., 2016, 2017). When calculating the topic coherence, we use the normalised pointwise mutual information (NPMI) which measures the relationship between word wi and other T −1 top words (Lau et al., 2014) as follows: NPMI (wi) = PT−1 j=1 [log P(wi,wj) P(wi)P(wj)/ − log P (wi, wj)]. The higher the value of topic coherence, the more explainable the topic is. 2https://github.com/soberqian/ TopicModel4J 3https://github.com/ysmiao/nvdm 4https://github.com/akashgit/ autoencoding_vi_for_topic_models 5https://github.com/pgcool/iDocNADEe 5.3 Performance Comparison Table 2 shows the perplexity and topic coherence of different models on the test datasets. We can observe that NB-NTM outperforms most baselines, and GNB-NTM performs the best in all cases. The results validate that the NB distribution can model over-dispersed documents well. Furthermore, the latent semantics of these corpora may be hierarchically dependent. In other words, the topics at the corpus level and those of each document are not independent but correlated with one another. 
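For reference, the two metrics reported in Table 2 can be computed as in the following sketch. This is our illustration: the document-frequency estimate of word co-occurrence probabilities and the toy inputs are assumptions, and NVI-based models substitute the variational lower bound for the exact log-likelihood.

```python
import numpy as np

def perplexity(log_likelihoods, doc_lengths):
    """exp( - mean over documents of log p(d) / N_d )."""
    per_doc = np.asarray(log_likelihoods, dtype=float) / np.asarray(doc_lengths, dtype=float)
    return float(np.exp(-per_doc.mean()))

def npmi(top_words, doc_word_sets):
    """Average NPMI over top-word pairs of one topic, with probabilities
    estimated as document frequencies on a reference corpus (a common choice)."""
    eps, n_docs = 1e-12, len(doc_word_sets)
    scores = []
    for i, wi in enumerate(top_words):
        for wj in top_words[i + 1:]:
            p_i = sum(wi in s for s in doc_word_sets) / n_docs
            p_j = sum(wj in s for s in doc_word_sets) / n_docs
            p_ij = sum(wi in s and wj in s for s in doc_word_sets) / n_docs
            pmi = np.log((p_ij + eps) / (p_i * p_j + eps))
            scores.append(pmi / -np.log(p_ij + eps))
    return float(np.mean(scores))

# Toy usage with made-up log-likelihoods, lengths, and documents.
docs = [{"game", "team", "season"}, {"game", "players"}, {"court", "law"}]
print(perplexity([-350.0, -410.0], [120, 150]))
print(npmi(["game", "team", "players"], docs))
```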
Model Perplexity Topic coherence Reuters 20News MXM Reuters 20News MXM HDP 302.3 730.7 319.5 0.305 0.223 0.356 NVDM 224.9 855.0 252.3 0.133 0.138 0.109 NVLDA 578.6 1252.2 668.7 0.253 0.240 0.216 prodLDA 648.1 1267.2 852.9 0.332 0.329 0.313 GSM 266.2 963.5 330.5 0.192 0.211 0.177 iDocNDAE 202.8 844.6 294.7 0.130 0.151 0.222 NB-NTM 181.0 740.6 247.5 0.341 0.343 0.340 GNB-NTM 146.7 602.8 216.8 0.377 0.375 0.427 Table 2: Perplexity and topic coherence results, where the latter is an average of three coherence scores by calculating 5, 10, and 15 top words for each topic. In terms of the model efficiency, neural topic models can be trained much faster than HDP on a large corpus by GPU acceleration. Take the largescaled MXM dataset as an example, the training time of both NB-NTM and GNB-NTM is around one hour using a GeForce GTX 960 GPU, while HDP needs more than three hours to converge using an AMD R5 3600 CPU. Under the same environment, the training time of all NVI-based topic models is close. In general, NVLDA, prodLDA, and NVDM run slightly faster than NB-NTM because the Gaussian reparameterization approach is simpler than the Gamma one. GSM and GNBNTM are slightly slower than others because the former introduces more parameters to model the topic-word distribution, while the latter introduces more sampling operations. As an illustration, we also qualitatively evaluate the semantic information learned by different models on the 20News training set. The baselines of HDP, NVLDA, and prodLDA, which achieve competitive topic coherence scores, are selected for comparison. Table 3 presents 5 of the most representative topics with the corresponding top 10 words, from which we can observe that although all these models can identify the chosen topics reasonably, our NB-NTM and GNB-NTM perform better than the other baselines in most cases. 
6166 Topic HDP NVLDA prodLDA NB-NTM GNB-NTM Religion god• heaven• shall belief• athos• who christ• worship religion• beliefs• people interpretation christians• athos• moral• atheism• scripture• religious• moral• truth• believe• christian• belief• scripture• church• religion• church• bible• jesus• christ• does truth• atheists• church• christian• atheists• lord• heaven• christian• jesus• his believe• acts christianity• christianity• evidence christianity• religions• god• belief• Encryption key• keys• encryption• rsa• cryptography• unit brad court agencies• crypto• keyboard chip clipper encrypted• encrypted• keys• crypto• semi cryptography• security• cable phone escrow• security• keys• lock• encryption• encrypted• scheme• nsa• fit cryptography• drugs government secure• cross agencies• gun escrow• key• back agency• criminal nsa• government women secure• criminals secure• agencies• Sport game• cup• players• player• hockey• fighting• toronto team• win• sport• four played• teams• play• game• games• patrick winning• boston games• almost wings ice detroit baseball• level players• him cup• fans• police rangers nhl• playoffs• season• co leafs season• players• teams• kill teams• hockey• season• wings effective baseball• leafs games• leafs Space its shuttle• commercial lunar• mission• earth• jpl• cryptography his algorithm organizations development mission• toronto nasa• first physics image orbit• chip high orbit• lunar• years• orbit• mission• rocket• processing mission• development shell• cost established dc solar• their energy rocket• year• space• such earth• remote national technology program space• soviet space• satellite• Hardware card• cable• mouse• floppy• cache• bit floppy• floppy• bus• vga• mac• dx card• ram• display• mb controller• lib printer• printer• memory• mb simms memory• interface• mhz pin• button• card• pc• ram• shipping printer• controller• dx monitor• brand meg motherboard• processor• speed drive ram• ide motherboard• bus• motherboard• motherboard• monitor• ram• Table 3: Top 10 words of 5 topics learned by different models on 20News, where • means the word is related to the corresponding topic by checking manually. 5.4 Impact of the Number of Topics In this part, we test the impact of the number of topics on the performance of our models. Figure 3 shows the convergence process of NB-NTM and GNB-NTM on the 20News training set with K = 20, 50, 100, 200 in terms of the perplexity. We can observe that as K increases, the perplexity values of both models decrease under each epoch. This is because the NVI framework is essentially an encoder-decoder, and the increase of the topic number enables the models to encode and reconstruct documents better. We also notice that with the continuous growth of K, the improvement of perplexity is getting lower. Table 4 presents the results of our models on the 20News testing set under the above conditions, in which a similar trend can be observed as aforementioned. 5.5 Evaluation on Learning Dispersed Topics Compared to the existing neural topic models, another feature of our models is that the generated 0 200 400 600 800 1000 epoch(s) 0 200 400 600 800 1000 1200 1400 NB-NTM Training Perplexity K = 20 K = 50 K = 100 K = 200 (a) NB-NTM 0 200 400 600 800 1000 epoch(s) 0 200 400 600 800 1000 1200 1400 GNB-NTM Training Perplexity K = 20 K = 50 K = 100 K = 200 (b) GNB-NTM Figure 3: The convergence behavior of our models with different numbers of topics on the 20News training set. 
Number of topics Perplexity Topic coherence NB-NTM GNB-NTM NB-NTM GNB-NTM 20 800.8 717.3 0.307 0.351 50 740.6 602.8 0.343 0.375 100 654.3 501.2 0.331 0.360 200 572.4 424.4 0.330 0.351 Table 4: Perplexity and topic coherence of our models on the 20News testing set with different topic numbers. topics are dispersed, and thus, the intermediate variables can be more explainable. To validate the effectiveness of our models on learning dispersed topics, we first count the total number of words under each manually labeled category (i.e., topic) as the topic-word number distribution shown in Figure 4 (a). Then we run our NB-NTM and GNB-NTM on the entire 20News testing set to get the corresponding values of r. After normalization, the proportion of different topics obtained by NB-NTM and GNB-NTM at the corpus level is presented in Figure 4 (b) and Figure 4 (c), respectively. For the convenience of the result presentation, we set the number of topics to 20 for both models. Note that the 20 topics do not need to correspond to the 20 categories, because we here focus on testing whether the topic proportions generated by our two models are in accordance with their model structures/characteristics. From these results, we can observe that the proportion of topics obtained by NB-NTM is close to the topic-word number distribution. On the other hand, GNB-NTM obtains more dispersed proportions of topics than NBNTM. These results suggest that GNB-NTM tends to allocate less but more important topics to the corpus, i.e., the topics generated by GNB-NTM are more discriminative. Since the document-topic distribution is not directly modeled and the Gaussian distribution samples are not non-negative, the previous neural methods except GSM cannot obtain explainable intermediate variables. For the baseline of GSM, Miao et al. (2017) had demonstrated that the topics with higher probabilities were evenly 6167 distributed on the same 20News dataset, which indicates that our models outperform GSM on learning dispersed document topics. 0 5 10 15 20 0.00 0.02 0.04 0.06 0.08 Proportion (a) 0 5 10 15 20 0.00 0.02 0.04 0.06 0.08 Proportion (b) 0 5 10 15 20 0.000 0.025 0.050 0.075 0.100 0.125 0.150 0.175 Proportion (c) Figure 4: Qualitative analysis on the model interpretability at the corpus level, where (a) is the topicword number distribution generated by the label information, (b) is the proportion of 20 topics obtained by NB-NTM, and (c) is the proportion of 20 topics obtained by GNB-NTM. All results are generated from the 20News testing set. We also study the dispersion of intermediate variables (i.e., topics) at the document level. By randomly select a document as an example, we get the normalized document topic weight λ from NB-NTM and GNB-NTM to explore whether the topic distributions of the document generated by our models are reasonable. As shown in Figure 5, the document is about a standard computer, and the most related topics with large topic distributions are all related to computers, which validates the practical meaning of intermediate variables of both NB-NTM and GNB-NTM at the document level. From the keywords in the most related topics, we further observe that GNB-NTM can identify more computer-related words than NB-NTM. When compared to the whole semantic space as shown in Figure 4, both NB-NTM and GNB-NTM generate more dispersed proportions of topics at the document level. This phenomenon is consistent with the over-dispersed feature (i.e., the variance is larger than the mean) of documents. 
6 Conclusion In this paper, we present two neural mixed counting models named NB-NTM and GNB-NTM. Different from the current time consuming Bayesian methods, our models apply to large-scale datasets through the efficient back-propagation algorithm and GPU acceleration. When compared to the existing neural topic models, both NB-NTM and GNBNTM can well model the random variables with I have a Standard Computer 486DX2/66mhz EISA Tower with 16MB RAM, a Quantum 240MB Hard Drive, 1.2 and 1.44 MB floppies and a Colorado 250MB tape drive. I also have a Sound Blaster Pro and a 3COM Ethernet card (3C507) installed. The machine is completely stable in non-Turbo mode. In Turbo mode, Windows for Workgroups crashes or won't come up at all. If Windows does come up, I get General Protection Faults and Divide by Zero System Errors. Is there a problem with memory keeping up with the speed of the CPU on these machines? I have tried to reach Standard Computers, but their phones have been disconnected. Does anyone know what happened to this company? YAMOHS- Yet Another Mail Order Horror Story! I'd prefer e-mailed responses as I don't get to read this newsgroup often. Key words of most related topics: 8: serial server unix 17: meg drive isa 14: printer ram memory 5: bus isa dx 13: serve lebanese villages Key words of most unrelated topics: 6: hockey sport game 7: kids him took 10: population turks law 16: jobs going pay 20: ftp programs directory Key words of most related topics: 8: serial server unix 17: meg drive isa 14: printer ram memory 5: bus isa dx 13: serve lebanese villages Key words of most unrelated topics: 6: hockey sport game 7: kids him took 10: population turks law 16: jobs going pay 20: ftp programs directory Key words of most related topics: 4: computer scsi space 18: serial algorithm installed 7: anybody really windows 20: mb card windows 1: drivers key car Key words of most unrelated topics: 15: secure truth Citizens 16: christ church turks 19: jewish arab religion 6: riding truth moral 5: baseball season games Key words of most related topics: 4: computer scsi space 18: serial algorithm installed 7: anybody really windows 20: mb card windows 1: drivers key car Key words of most unrelated topics: 15: secure truth Citizens 16: christ church turks 19: jewish arab religion 6: riding truth moral 5: baseball season games Figure 5: The proportion and key words of 20 topics obtained by our models on a document instance. over-dispersed and hierarchically dependent characteristics. Extensive experiments on real-world datasets validate the effectiveness of our models in terms of perplexity, topic coherence, and producing explainable intermediate variables by generating dispersed proportions of document topics. The results also indicate that NB distribution families can characterize text data aptly, which is essentially due to their conformity with the over-dispersed and sparse properties of natural language. Acknowledgment We are grateful to the reviewers for their constructive comments and suggestions on this study. This work has been supported by the National Natural Science Foundation of China (61972426), Guangdong Basic and Applied Basic Research Foundation (2020A1515010536), HKIBS Research Seed Fund 2019/20 (190-009), the Research Seed Fund (102367), and LEO Dr David P. Chan Institute of Data Science of Lingnan University, Hong Kong. 
This work has also been supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (UGC/FDS16/E01/19), Hong Kong Research Grants Council through a General Research Fund (project no. PolyU 1121417), and by the Hong Kong Polytechnic University through a start-up fund (project no. 980V). 6168 References Ayan Acharya, Joydeep Ghosh, and Mingyuan Zhou. 2015. Nonparametric bayesian factor analysis for dynamic count matrices. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics. David M. Blei and John D. Lafferty. 2005. Correlated topic models. In Proceedings of the 19th Annual Conference on Neural Information Processing Systems, pages 147–154. David M. Blei and John D. Lafferty. 2006. Dynamic topic models. In Proceedings of the 23rd International Conference on Machine Learning, pages 113– 120. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. John Frank Charles Kingman. 1967. Completely random measures. Pacific Journal of Mathematics, 21(1):59–78. Dehua Cheng and Yan Liu. 2014. Parallel gibbs sampling for hierarchical dirichlet processes via gamma processes equivalence. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 562–571. Rajarshi Das, Manzil Zaheer, and Chris Dyer. 2015. Gaussian lda for topic models with word embeddings. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, pages 795–804. Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407. Pankaj Gupta, Yatin Chaudhary, Florian Buettner, and Hinrich Sch¨utze. 2019. Document informed neural autoregressive topic models with distributional prior. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, pages 6505–6512. Thomas Hofmann. 1999. Probabilistic latent semantic analysis. In Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence, pages 289– 296. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In Proceedings of the 2nd International Conference on Learning Representations. Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 530–539. John Reidar Mathiassen, Amund Skavhaug, and Ketil Bø. 2002. Texture similarity measure using kullback-leibler divergence between gamma distributions. In Proceedings of the 7th European Conference on Computer Vision, pages 133–147. Yishu Miao, Edward Grefenstette, and Phil Blunsom. 2017. Discovering discrete latent topics with neural variational inference. In Proceedings of the 34th International Conference on Machine Learning, pages 2410–2419. Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. 
In Proceedings of the 33nd International Conference on Machine Learning, pages 1727–1736. Andriy Mnih and Karol Gregor. 2014. Neural variational inference and learning in belief networks. In Proceedings of the 31th International Conference on Machine Learning, pages 1791–1799. Christian A. Naesseth, Francisco J. R. Ruiz, Scott W. Linderman, and David M. Blei. 2017. Reparameterization gradients through acceptance-rejection sampling algorithms. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pages 489–498. Eric T. Nalisnick and Padhraic Smyth. 2017. Stickbreaking variational autoencoders. In Proceedings of the 5th International Conference on Learning Representations. Feng Nan, Ran Ding, Ramesh Nallapati, and Bing Xiang. 2019. Topic modeling with wasserstein autoencoders. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 6345–6381. Daniel Ramage, David Hall, Ramesh Nallapati, and Christopher D. Manning. 2009. Labeled LDA: A supervised topic model for credit attribution in multilabeled corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 248–256. Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31th International Conference on Machine Learning, pages 1278–1286. Akash Srivastava and Charles A. Sutton. 2017. Autoencoding variational inference for topic models. In Proceedings of the 5th International Conference on Learning Representations. 6169 Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2004. Sharing clusters among related groups: Hierarchical dirichlet processes. In Proceedings of the 18th Annual Conference on Neural Information Processing Systems, pages 1385–1392. Romain Thibaux and Michael I. Jordan. 2007. Hierarchical beta processes and the indian buffet process. In Proceedings of the 11th International Conference on Artificial Intelligence and Statistics, pages 564– 571. Michalis K. Titsias. 2007. The infinite gamma-poisson feature model. In Proceedings of the 21st Annual Conference on Neural Information Processing Systems, pages 1513–1520. Chong Wang, John W. Paisley, and David M. Blei. 2011. Online variational inference for the hierarchical dirichlet process. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, pages 752–760. Xuerui Wang and Andrew McCallum. 2006. Topics over time: a non-markov continuous-time model of topical trends. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 424–433. Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2013. A biterm topic model for short texts. In Proceedings of the 22nd International World Wide Web Conference, pages 1445–1456. Limin Yao, David M. Mimno, and Andrew McCallum. 2009. Efficient methods for topic model inference on streaming document collections. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 937–946. He Zhao, Piyush Rai, Lan Du, Wray L. Buntine, and Mingyuan Zhou. 2019. Variational autoencoders for sparse and overdispersed discrete data. arXiv preprint arXiv:1905.00616, abs/1905.00616. Mingyuan Zhou. 2018. Nonparametric bayesian negative binomial factor analysis. Bayesian Analysis, 13(4):1061–1089. Mingyuan Zhou and Lawrence Carin. 2012. Augmentand-conquer negative binomial processes. 
In Proceedings of the 26th Annual Conference on Neural Information Processing Systems, pages 2555–2563. Mingyuan Zhou and Lawrence Carin. 2015. Negative binomial process count and mixture modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(2):307–320. Mingyuan Zhou, Oscar Hernan Madrid Padilla, and James G Scott. 2016. Priors for random count matrices derived from a family of negative binomial processes. Journal of the American Statistical Association, 111(515):1144–1156.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6170–6180 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6170 Reasoning Over Semantic-Level Graph for Fact Checking Wanjun Zhong1∗, Jingjing Xu3∗, Duyu Tang2, Zenan Xu1, Nan Duan2, Ming Zhou2 Jiahai Wang1 and Jian Yin1 1 The School of Data and Computer Science, Sun Yat-sen University. Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, P.R.China 2 Microsoft Research 3 MOE Key Lab of Computational Linguistics, Peking University {zhongwj25@mail2,xuzn@mail2}.sysu.edu.cn {wangjiah@mail,issjyin@mail}.sysu.edu.cn {dutang,nanduan,mingzhou}@microsoft.com; [email protected] Abstract Fact checking is a challenging task because verifying the truthfulness of a claim requires reasoning about multiple retrievable evidence. In this work, we present a method suitable for reasoning about the semantic-level structure of evidence. Unlike most previous works, which typically represent evidence sentences with either string concatenation or fusing the features of isolated evidence sentences, our approach operates on rich semantic structures of evidence obtained by semantic role labeling. We propose two mechanisms to exploit the structure of evidence while leveraging the advances of pre-trained models like BERT, GPT or XLNet. Specifically, using XLNet as the backbone, we first utilize the graph structure to re-define the relative distances of words, with the intuition that semantically related words should have short distances. Then, we adopt graph convolutional network and graph attention network to propagate and aggregate information from neighboring nodes on the graph. We evaluate our system on FEVER, a benchmark dataset for fact checking, and find that rich structural information is helpful and both our graph-based mechanisms improve the accuracy. Our model is the state-of-the-art system in terms of both official evaluation metrics, namely claim verification accuracy and FEVER score. 1 Introduction Internet provides an efficient way for individuals and organizations to quickly spread information to massive audiences. However, malicious people spread false news, which may have significant influence on public opinions, stock prices, even presidential elections (Faris et al., 2017). Vosoughi et al. (2018) show that false news reaches more people ∗Work done while this author was an intern at Microsoft Research. Claim: The Rodney King riots took place in the most populous county in the USA. Evidence #1: The 1992 Los Angeles riots, also known as the Rodney King riots were a series of riots, lootings, arsons, and civil disturbances that occurred in Los Angeles County, California in April and May 1992. Evidence #2: Los Angeles County, officially the County of Los Angeles, is the most populous county in the USA. Fact knowledge extracted from evidence sentences 1 2 3 4 5 Figure 1: A motivating example for fact checking and the FEVER task. Verifying the claim requires understanding the semantic structure of multiple evidence sentences and the reasoning process over the structure. than the truth. The situation is more urgent as advanced pre-trained language models (Radford et al., 2019) can produce remarkably coherent and fluent texts, which lowers the barrier for the abuse of creating deceptive content. In this paper, we study fact checking with the goal of automatically assessing the truthfulness of a textual claim by looking for textual evidence. 
Previous works are dominated by natural language inference models (Dagan et al., 2013; Angeli and Manning, 2014) because the task requires reasoning of the claim and retrieved evidence sentences. They typically either concatenate evidence sentences into a single string, which is used in top systems in the FEVER challenge (Thorne et al., 2018b), or use feature fusion to aggregate the features of isolated evidence sentences (Zhou et al., 2019). However, both methods fail to capture rich semantic-level structures among multiple evidence, which also prevents the use of deeper reasoning model for fact checking. In Figure 1, we give a motivating example. Making the correct prediction requires a model to reason based on the understanding that “Rodney King riots” is occurred in “Los Angeles County” from the first evidence, and that “Los Angeles County” is “the most populous county 6171 in the USA” from the second evidence. It is therefore desirable to mine the semantic structure of evidence and leverage it to verify the truthfulness of the claim. Under the aforementioned consideration, we present a graph-based reasoning approach for fact checking. With a given claim, we represent the retrieved evidence sentences as a graph, and then use the graph structure to guide the reasoning process. Specifically, we apply semantic role labeling (SRL) to parse each evidence sentence, and establish links between arguments to construct the graph. When developing the reasoning approach, we intend to simultaneously leverage rich semantic structures of evidence embodied in the graph and powerful contextual semantics learnt in pre-trained models like BERT (Devlin et al., 2018), GPT (Radford et al., 2019) and XLNet (Yang et al., 2019). To achieve this, we first re-define the distance between words based on the graph structure when producing contextual representations of words. Furthermore, we adopt graph convolutional network and graph attention network to propagate and aggregate information over the graph structure. In this way, the reasoning process employs semantic representations at both word/sub-word level and graph level. We conduct experiments on FEVER (Thorne et al., 2018a), which is one of the most influential benchmark datasets for fact checking. FEVER consists of 185,445 verified claims, and evidence sentences for each claim are natural language sentences from Wikipedia. We follow the official evaluation protocol of FEVER, and demonstrate that our approach achieves state-of-the-art performance in terms of both claim classification accuracy and FEVER score. Ablation study shows that the integration of graph-driven representation learning mechanisms improves the performance. We briefly summarize our contributions as follows. • We propose a graph-based reasoning approach for fact checking. Our system apply Semantic Role Labeling (SRL) to construct graphs and present two graph-driven representation learning mechanisms. • Results verify that both graph-based mechanisms improve the accuracy, and our final system achieves state-of-the-art performance on the FEVER dataset. 2 Task Definition and Pipeline With a textual claim given as the input, the problem of fact checking is to find supporting evidence sentences to verify the truthfulness of the claim. We conduct our research on FEVER (Thorne et al., 2018a), short for Fact Extraction and VERification, a benchmark dataset for fact checking. 
Systems are required to retrieve evidence sentences from Wikipedia, and predict the claim as “SUPPORTED”, “REFUTED” or “NOT ENOUGH INFO (NEI)”, standing for that the claim is supported by the evidence, refuted by the evidence, and is not verifiable, respectively. There are two official evaluation metrics in FEVER. The first is the accuracy for three-way classification. The second is FEVER score, which further measures the percentage of correct retrieved evidence for “SUPPORTED” and “REFUTED” categories. Both the statistic of FEVER dataset and the equation for calculating FEVER score are given in Appendix B. Our Pipeline claim Document Selection documents Sentence Selection sentences Claim Verification evidence SUPPORTED | REFUTED | NOTENOUGHINFO Figure 2: Our pipeline for fact checking on FEVER. The main contribution of this work is a graph-based reasoning model for claim verification. Here, we present an overview of our pipeline for FEVER, which follows the majority of previous studies. Our pipeline consists of three main components: a document retrieval model, a sentence-level evidence selection model, and a claim verification model. Figure 2 gives an overview of the pipeline. With a given claim, the document retrieval model retrieves the most related documents from a given collection of Wikipedia documents. With retrieved documents, the evidence selection model selects top-k related sentences as the evidence. Finally, the claim verification model takes the claim and evidence sentences as the input and outputs the veracity of the claim. The main contribution of this work is the graphbased reasoning approach for claim verification, which is explained detailedly in Section 3. Our 6172 Evidence #1: The 1992 Los Angeles riots, also known as the Rodney King riots were a series of riots, lootings, arsons, and civil disturbances that occurred in Los Angeles County, California in April and May 1992. Evidence #2: Los Angeles County, officially the County of Los Angeles, is the most populous county in the USA. VERB is ARG1 Los Angeles County, officially the County of Los Angeles ARG2 the most populous county in the USA SRL results with verb “is” VERB known ARG1 ARG2 ADVERBIAL also SRL results with verb “known” as the Rodney King riots The 1992 Los Angeles riots VERB occurred ARG1 riots, lootings, arsons, and civil disturbances LOCATION In Los Angeles County, California TEMPORAL SRL results with verb “occurred” in April and May 1992 Graph Construction Figure 3: The constructed graph for the motivating example with two evidence sentences. Each box describes a “tuple” which is extracted by SRL triggered by a verb. Blue solid lines indicate edges that connect arguments within a tuple and red dotted lines indicate edges that connect argument across different tuples. strategies for document selection and evidence selection are described in Section 4. 3 Graph-Based Reasoning Approach In this section, we introduce our graph-based reasoning approach for claim verification, which is the main contribution of this paper. Taking a claim and retrieved evidence sentences1 as the input, our approach predicts the truthfulness of the claim. For FEVER, it is a three-way classification problem, which predicts the claim as “SUPPORTED”, “REFUTED” or “NOT ENOUGH INFO (NEI)”. The basic idea of our approach is to employ the intrinsic structure of evidence to assess the truthfulness of the claim. 
As shown in the motivating example in Figure 1, making the correct prediction needs good understanding of the semantic-level structure of evidence and the reasoning process based on that structure. In this section, we first describe our graph construction module (§3.1). Then, we present how to apply graph structure for fact checking, including a contextual representation learning mechanism with graph-based distance calculation (§3.2), and graph convolutional network and graph attention network to propagate and aggregate information over the graph (§3.3 and §3.4). 3.1 Graph Construction Taking evidence sentences as the input, we would like to build a graph to reveal the intrinsic structure of these evidence. There might be many different 1Details about how to retrieve evidence for a claim are described in Section 4. ways to construct the graph, such as open information extraction (Banko et al., 2007), named entity recognition plus relation classification, sequenceto-sequence generation which is trained to produce structured tuples (Goodrich et al., 2019), etc. In this work, we adopt a practical and flexible way based on semantic role labeling (Carreras and M`arquez, 2004). Specifically, with the given evidence sentences, our graph construction operates in the following steps. • For each sentence, we parse it to tuples2 with an off-the-shelf SRL toolkit developed by AllenNLP3, which is a re-implementation of a BERT-based model (Shi and Lin, 2019). • For each tuple, we regard its elements with certain types as the nodes of the graph. We heuristically set those types as verb, argument, location and temporal, which can also be easily extended to include more types. We create edges for every two nodes within a tuple. • We create edges for nodes across different tuples to capture the structure information among multiple evidence sentences. Our idea is to create edges for nodes that are literally similar with each other. Assuming entity A and entity B come from different tuples, we add one edge if one of the following conditions is satisfied: (1) A equals B; (2) A contains B; (3) the number of overlapped words 2A sentence could be parsed as multiple tuples. 3https://demo.allennlp.org/ semantic-role-labeling 6173 p g claim … [SEP] … sentence 1 … sentence 2 … XLNet with Graph Distance take place The Rodney King riots in the most populous county in the USA Graph Convolutional Network Graph Convolutional Network Graph Attention output as the Rodney King riots The 1992 Los Angeles riots in Los Angeles County, California the most populous county in the USA. … … Los Angeles County, officially … is known also Figure 4: An overview of our graph-based reasoning approach for claim verification. Taking a claim and evidence sentences as the input, we first calculate contextual word representations with graph-based distance (§3.2). After that, we use graph convolutional network to propagate information over the graph (§3.3), and use graph attention network to aggregate information (§3.4) before making the final prediction. between A and B is larger than the half of the minimum number of words in A and B. Figure 3 shows the constructed graph of the evidence in the motivating example. In order to obtain the structure information of the claim, we use the same pipeline to represent a claim as a graph. Our graph construction module offers an approach on modeling structure of multiple evidence, which could be further developed in the future. 
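The construction rules above translate almost directly into code. The following is a minimal sketch, assuming the SRL output has already been reduced to one {role: span} dictionary per tuple; it keeps verb, argument, location, and temporal spans as nodes, fully connects nodes within a tuple, and links nodes across tuples with the equality, containment, or word-overlap rule. The use of networkx and all helper names are illustrative choices, not the authors' implementation.

```python
import itertools
import networkx as nx

def kept(role: str) -> bool:
    # node types named in the paper: verb, argument, location, temporal
    return role == "VERB" or role.startswith("ARG") or role in {"LOCATION", "TEMPORAL"}

def related(a: str, b: str) -> bool:
    """Cross-tuple linking rule: equal spans, containment, or a word overlap
    larger than half of the number of words in the shorter span."""
    if a == b or a in b or b in a:
        return True
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) > 0.5 * min(len(wa), len(wb))

def build_evidence_graph(srl_tuples):
    """srl_tuples: list of {role: span} dicts, e.g.
    {"VERB": "occurred", "ARG1": "riots, lootings, arsons, and civil disturbances",
     "LOCATION": "in Los Angeles County, California",
     "TEMPORAL": "in April and May 1992"}"""
    g = nx.Graph()
    groups = []
    for t in srl_tuples:
        nodes = [span for role, span in t.items() if kept(role)]
        g.add_nodes_from(nodes)
        g.add_edges_from(itertools.combinations(nodes, 2))   # edges within a tuple
        groups.append(nodes)
    for g1, g2 in itertools.combinations(groups, 2):          # edges across tuples
        g.add_edges_from((a, b) for a in g1 for b in g2 if related(a, b))
    return g
```

A real implementation would normalize punctuation and casing before comparing spans; the overlap test above operates on raw whitespace tokens purely for brevity.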
3.2 Contextual Word Representations with Graph Distance We describe the use of graph for learning graphenhanced contextual representations of words4. Our basic idea is to shorten the distance between two semantically related words on the graph, which helps to enhance their relationship when we calculate contextual word representations with a Transformer-based (Vaswani et al., 2017) pretrained model like BERT and XLNet. Supposing we have five evidence sentences {s1, s2, ... s5} and the word w1i from s1 and the word w5j from s5 are connected on the graph, simply concatenating evidence sentences as a single string fails to capture their semantic-level structure, and would give a large distance to w1i and w5j, which is the number of words between them across other three sentences (i.e., s2, s3, and s4). An intuitive way to achieve our goal is to define an N × N matrix of distances of words along the graph, where N is the total number of words in the evidence. However, this is unacceptable in practice because the 4In Transformer-based representation learning pipeline, the basic computational unit can also be word-piece. For simplicity, we use the term “word” in this paper. representation learning procedure will take huge memory space, which is also observed by Shaw et al. (2018). In this work, we adopt pre-trained model XLNet (Yang et al., 2019) as the backbone of our approach because it naturally involves the concept of relative position5. Pre-trained models capture rich contextual representations of words, which is helpful for our task which requires sentence-level reasoning. Considering the aforementioned issues, we implement an approximate solution to trade off between the efficiency of implementation and the informativeness of the graph. Specifically, we reorder evidence sentences with a topology sort algorithm with the intuition that closely linked nodes should exist in neighboring sentences. This would prefer that neighboring sentences contain either parent nodes or sibling nodes, so as to better capture the semantic relatedness between different evidence sentences. We present our implementation in Appendix A. The algorithm begins from nodes without incident relations. For each node without incident relations, we recursively visit its child nodes in a depth-first searching way. After obtaining graph-based relative position of words, we feed the sorted sequence into XLNet to obtain the contextual representations. Meanwhile, we obtain the representation h([CLS]) for a special token [CLS], which stands for the joint representation of the claim and the evidence in Transformer-based architecture. 5Our approach can also be easily adapted to BERT by adding relative position like Shaw et al. (2018). 6174 3.3 Graph Convolutional Network We have injected the graph information in Transformer and obtained h([CLS]), which captures the semantic interaction between the claim and the evidence at word level 6. As shown in our motivating example in Figure 1 and the constructed graph in Figure 3, the reasoning process needs to operate on span/argument-level, where the basic computational unit typically consists of multiple words like “Rodney King riots” and “the most popular county in the USA”. To further exploit graph information beyond word level, we first calculate the representation of a node, which is a word span in the graph, by averaging the contextual representations of words contained in the node. 
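To make the word-level to span-level step concrete, the small sketch below averages the contextual embeddings of the tokens covered by each graph node, producing the initial node matrix used by the graph modules that follow. The tensor shapes and the node-to-token mapping are assumptions for illustration; in practice the mapping depends on how the tokenizer splits each span into word pieces.

```python
import torch

def node_representations(token_embeddings: torch.Tensor,
                         node_to_token_ids: dict) -> torch.Tensor:
    """token_embeddings: [seq_len, d] output of the (graph-reordered) encoder.
    node_to_token_ids: {node index: [positions of the tokens in that span]}.
    Returns a [num_nodes, d] matrix of initial node representations, each the
    mean of the contextual embeddings of the tokens the node covers."""
    reps = []
    for node in sorted(node_to_token_ids):
        idx = torch.tensor(node_to_token_ids[node], dtype=torch.long)
        reps.append(token_embeddings.index_select(0, idx).mean(dim=0))
    return torch.stack(reps)
```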
After that, we employ multilayer graph convolutional network (GCNs) (Kipf and Welling, 2016) to update the node representation by aggregating representations from their neighbors on the graph. Formally, we denote G as the graph constructed by the previous graph construction method and make H ∈RNv×d a matrix containing representation of all nodes, where Nv and d denote the number of nodes and the dimension of node representations, respectively. Each row Hi ∈Rd is the representation of node i. We introduce an adjacency matrix A of graph G and its degree matrix D, where we add self-loops to matrix A and Dii = P j Aij. One-layer GCNs will aggregate information through one-hop edges, which is calculated as follows: H(1) i = ρ( eAHiW0), (1) where H(1) i ∈Rd is the new d-dimension representation of node i, eA = D−1 2 AD−1 2 is the normalized symmetric adjacency matrix, W0 is a weight matrix, and ρ is an activation function. To exploit information from the multi-hop neighboring nodes, we stack multiple GCNs layers: H(j+1) i = ρ( eAH(j) i Wj), (2) where j denotes the layer number and H0 i is the initial representation of node i initialized from the contextual representation. We simplify H(k) as H for later use, where H indicates the representation of all nodes updated by k-layer GCNs. 6By “word” in “word-level”, we mean the basic computational unit in XLNet, and thus h([CLS]) capture the sophisticated interaction between words via multi-layer multi-head attention operations. The graph learning mechanism will be performed separately for claim-based and evidencebased graph. Therefore, we denote Hc and He as the representations of all nodes in claim-based graph and evidence-based graphs, respectively. Afterwards, we utilize the graph attention network to align the graph-level node representation learned for two graphs before making the final prediction. 3.4 Graph Attention Network We explore the related information between two graphs and make semantic alignment for final prediction. Let He ∈RNv e ×d and Hc ∈RNv c ×d denote matrices containing representations of all nodes in evidence-based and claim-based graph respectively, where Nv e and Nv c denote number of nodes in the corresponding graph. We first employ a graph attention mechanism (Veliˇckovi´c et al., 2017) to generate a claim-specific evidence representation for each node in claimbased graph. Specifically, we first take each hi c ∈ Hc as query, and take all node representations hj e ∈ He as keys. We then perform graph attention on the nodes, an attention mechanism a : Rd ×Rd → R to compute attention coefficient as follows: eij = a(Wchi c, Wehj e) (3) which means the importance of evidence node j to the claim node i. Wc ∈RF×d and We ∈RF×d is the weight matrix and F is the dimension of attention feature. We use the dot-product function as a here. We then normalize eij using the softmax function: αij = softmaxj(eij) = exp(eij) P k∈Nve exp(eik) (4) After that, we calculate a claim-centric evidence representation X = [x1, . . . , xNvc ] using the weighted sum over He: xi = X j∈Nve αijhj e (5) We then perform node-to-node alignment and calculate aligned vectors A = [a1, . . . , aNvc ] by the claim node representation Hc and the claimcentric evidence representation X, ai = falign(hi c, xi), (6) where falign() denotes the alignment function. Inspired by Shen et al. (2018), we design our alignment function as: falign(x, y) = Wa[x, y, x −y, x ⊙y], (7) 6175 where Wa ∈Rd×4∗d is a weight matrix and ⊙is element-wise Hadamard product. 
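The two graph modules just described can be summarized in a short PyTorch sketch: a GCN layer implementing Equation (2) with self-loops and symmetric normalization, and the claim-to-evidence graph attention with the [x, y, x−y, x⊙y] alignment function of Equation (7). ReLU stands in for the unspecified activation, and the mean pooling that produces g (described next) is folded into the alignment module for brevity; dimensions and module names are illustrative, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One hop of information propagation: H_new = relu(A_norm @ H @ W)."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim, bias=False)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: [N, N] 0/1 adjacency; add self-loops, then D^{-1/2} A D^{-1/2}.
        a = adj + torch.eye(adj.size(0), device=adj.device)
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
        return F.relu(a_norm @ self.linear(h))

class ClaimEvidenceAlignment(nn.Module):
    """Claim nodes attend over evidence nodes, then each claim node is aligned
    with its claim-centric evidence vector and the result is mean-pooled."""
    def __init__(self, dim: int, attn_dim: int):
        super().__init__()
        self.wc = nn.Linear(dim, attn_dim, bias=False)
        self.we = nn.Linear(dim, attn_dim, bias=False)
        self.align = nn.Linear(4 * dim, dim)

    def forward(self, h_claim: torch.Tensor, h_evid: torch.Tensor) -> torch.Tensor:
        # e_ij = (W_c h_c^i) . (W_e h_e^j), normalized over evidence nodes
        scores = self.wc(h_claim) @ self.we(h_evid).t()            # [Nc, Ne]
        alpha = F.softmax(scores, dim=-1)
        x = alpha @ h_evid                                         # claim-centric evidence
        feats = torch.cat([h_claim, x, h_claim - x, h_claim * x], dim=-1)
        return self.align(feats).mean(dim=0)                       # pooled vector g
```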
The final output g is obtained by the mean pooling over A. We then feed the concatenated vector of g and the final hidden vector h([CLS]) from XLNet through a MLP layer for the final prediction. 4 Document Retrieval and Evidence Selection In this section, we briefly describe our document retrieval and evidence selection components to make the paper self contained. 4.1 Document Retrieval The document retrieval model takes a claim and a collection of Wikipedia documents as the input, and returns m most relevant documents. We mainly follow Nie et al. (2019), the topperforming system on the FEVER shared task (Thorne et al., 2018b). The document retrieval model first uses keyword matching to filter candidate documents from the massive Wikipedia documents. Then, NSMN (Nie et al., 2019) is applied to handle the documents with disambiguation titles, which are 10% of the whole documents. Documents without disambiguation title are assigned with higher scores in the resulting list. The input to the NSMN model includes the claim and candidate documents with disambiguation title. At a high level, NSMN model has encoding, alignment, matching and output layers. Readers who are interested are recommended to refer to the original paper for more details. Finally, we select top-10 documents from the resulting list. 4.2 Sentence-Level Evidence Selection Taking a claim and all the sentences from retrieved documents as the input, evidence selection model returns the top-k most relevant sentences. We regard evidence selection as a semantic matching problem, and leverage rich contextual representations embodied in pre-trained models like XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019a) to measure the relevance of a claim to every evidence candidate. Let’s take XLNet as an example. The input of the sentence selector is cei = [Claim, SEP, Evidencei, SEP, CLS] where Claim and Evidencei indicate tokenized word-pieces of original claim and ith evidence candidate, d denotes the dimension of hidden vector, and SEP and CLS are symbols indicating ending of a sentence and ending of a whole input, respectively. The final representation hcei ∈Rd is obtained via extracting the hidden vector of the CLS token. After that, we employ an MLP layer and a softmax layer to compute score s+ cei for each evidence candidate. Then, we rank all the evidence sentences by score s+ cei. The model is trained on the training data with a standard cross-entropy loss. Following the official setting in FEVER, we select top-5 evidence sentences. The performance of our evidence selection model is shown in Appendix C. 5 Experiments We evaluate on FEVER (Thorne et al., 2018a), a benchmark dataset for fact extraction and verification. Each instance in FEVER dataset consists of a claim, groups of ground-truth evidence from Wikipedia and a label (i.e., “SUPPORTED”, “REFUTED” or “NOT ENOUGH INFO (NEI)”), indicating its veracity. FEVER includes a dump of Wikipedia, which contains 5,416,537 pre-processed documents. The two official evaluation metrics of FEVER are label accuracy and FEVER score, as described in Section 2. Label accuracy is the primary evaluation metric we apply for our experiments because it directly measures the performance of the claim verification model. We also report FEVER score for comparison, which measures whether both the predicted label and the retrieved evidence are correct. No evidence is required if the predicted label is NEI. 
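As a concrete reference for the sentence-level evidence selection described in Section 4.2, the sketch below scores each claim-candidate pair with a pre-trained encoder and a small classification head, and ranks candidates by the positive-class probability. It uses a RoBERTa-style Hugging Face model for convenience; the model name, the position of the summary vector, and the untrained scoring head are placeholders rather than the exact system configuration (the paper's selector is trained with cross-entropy, and with XLNet the summary token sits at the end of the sequence).

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")
encoder.eval()
# In the real system this head is trained with a cross-entropy loss.
score_head = torch.nn.Linear(encoder.config.hidden_size, 2)

def rank_evidence(claim: str, candidates: list, k: int = 5):
    """Return the top-k candidate sentences by relevance score s+."""
    scores = []
    for cand in candidates:
        inputs = tokenizer(claim, cand, return_tensors="pt",
                           truncation=True, max_length=128)
        with torch.no_grad():
            h = encoder(**inputs).last_hidden_state[:, 0]       # summary vector
        s_plus = F.softmax(score_head(h), dim=-1)[0, 1].item()  # prob. of "relevant"
        scores.append(s_plus)
    order = sorted(range(len(candidates)), key=lambda i: -scores[i])
    return [candidates[i] for i in order[:k]]
```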
5.1 Baselines We compare our system to the following baselines, including three top-performing systems on FEVER shared task, a recent work GEAR (Zhou et al., 2019), and a concurrent work by Liu et al. (2019b). • Nie et al. (2019) employ a semantic matching neural network for both evidence selection and claim verification. • Yoneda et al. (2018) infer the veracity of each claim-evidence pair and make final prediction by aggregating multiple predicted labels. • Hanselowski et al. (2018) encode each claimevidence pair separately, and use a pooling function to aggregate features for prediction. 6176 Method Label FEVER Acc (%) Score (%) Hanselowski et al. (2018) 65.46 61.58 Yoneda et al. (2018) 67.62 62.52 Nie et al. (2019) 68.21 64.21 GEAR (Zhou et al., 2019) 71.60 67.10 KGAT (Liu et al., 2019b) 72.81 69.40 DREAM (our approach) 76.85 70.60 Table 1: Performance on the blind test set on FEVER. Our approach is abbreviated as DREAM. • GEAR (Zhou et al., 2019) uses BERT to obtain claim-specific representation for each evidence sentence, and applies graph network by regarding each evidence sentence as a node in the graph. • KGAT (Liu et al., 2019b) is concurrent with our work, which regards sentences as the nodes of a graph and uses Kernel Graph Attention Network to aggregate information. 5.2 Model Comparison Table 1 reports the performance of our model and baselines on the blind test set with the score showed on the public leaderboard7. As shown in Table 1, in terms of label accuracy, our model significantly outperforms previous systems with 76.85% on the test set. It is worth noting that, our approach, which exploits explicit graph-level semantic structure of evidence obtained by SRL, outperforms GEAR and KGAT, both of which regard sentences as the nodes and use model to learn the implicit structure of evidence 8. By the time our paper is submitted, our system achieves state-of-the-art performance in terms of both evaluation metrics on the leaderboard. 5.3 Ablation Study Table 2 presents the label accuracy on the development set after eliminating different components (including the graph-based relative distance (§3.2) and graph convolutional network and graph attention network (§3.3 and §3.4) separately in our model. 7The public leaderboard for perpetual evaluation of FEVER is https://competitions.codalab.org/ competitions/18814#results. DREAM is our user name on the leaderboard. 8We don’t overclaim that the superiority of our system to GEAR and KGAT only comes from the explicit graph structure, because we have differences in other components like sentence selection and the pre-trained model. Model Label Accuracy DREAM 79.16 -w/o Relative Distance 78.35 -w/o GCN&GAN 77.12 -w/o both above modules 75.40 Table 2: Ablation study on develop set. The last row in Table 2 corresponds to the baseline where all the evidence sentences are simply concatenated as a single string, where no explicit graph structure is used at all for fact verification. As shown in Table 2, compared to the XLNet baseline, incorporating both graph-based modules brings 3.76% improvement on label accuracy. Removing the graph-based distance drops 0.81% in terms of label accuracy. The graph-based distance mechanism can shorten the distance of two closelylinked nodes and help the model to learn their dependency. Removing the graph-based reasoning module drops 2.04% because graph reasoning module captures the structural information and performs deep reasoning about that. Figure 5 gives a case study of our approach. 
5.4 Error Analysis We randomly select 200 incorrectly predicted instances and summarize the primary types of errors. The first type of errors is caused by failing to match the semantic meaning between phrases that describe the same event. For example, the claim states “Winter’s Tale is a book”, while the evidence states “Winter ’s Tale is a 1983 novel by Mark Helprin”. The model fails to realize that “novel” belongs to “book” and states that the claim is refuted. Solving this type of errors needs to involve external knowledge (e.g. ConceptNet (Speer et al., 2017)) that can indicate logical relationships between different events. The misleading information in the retrieved evidence causes the second type of errors. For example, the claim states “The Gifted is a movie”, and the ground-truth evidence states “The Gifted is an upcoming American television series”. However, the retrieved evidence also contains “The Gifted is a 2014 Filipino dark comedy-drama movie”, which misleads the model to make the wrong judgment. 6 Related Work In general, fact checking involves assessing the truthfulness of a claim. In literature, a claim can be 6177 1 Claim Text: Congressional Space Medal of Honor is the highest award given only to astronauts by NASA. Tuples: ('Congressional Space Medal of Honor', 'is', 'the highest award given only to astronauts by NASA’) ('the highest award’, 'given','only', 'to astronauts', 'by NASA') Evidence #1 Text: The highest award given by NASA , Congressional Space Medal of Honor is awarded by the President of the United States in Congress 's name on recommendations from the Administrator of the National Aeronautics and Space Administration . Tuples: ('The highest award','given','by NASA’) ('Congressional Space Medal of Honor','awarded','by the President of the United States') Evidence #2 Text: To be awarded the Congressional Space Medal of Honor , an astronaut must perform feats of extraordinary accomplishment while participating in space flight under the authority of NASA . Tuples: ('awarded', 'the Congressional Space Medal of Honor’) ('To be awarded the Congressional Space Medal of Honor',’an astronaut','perform','feats of extraordinary accomplishment’) ('an astronaut', 'participating','in space flight','under the authority of NASA' ) Figure 5: A case study of our approach. Facts shared across the claim and the evidence are highlighted with different colors. a text or a subject-predicate-object triple (Nakashole and Mitchell, 2014). In this work, we only consider textual claims. Existing datasets differ from data source and the type of supporting evidence for verifying the claim. An early work by Vlachos and Riedel (2014) constructs 221 labeled claims in the political domain from POLITIFACT.COM and CHANNEL4.COM, giving metadata of the speaker as the evidence. POLIFACT is further investigated by following works, including Ferreira and Vlachos (2016) who build Emergent with 300 labeled rumors and about 2.6K news articles, Wang (2017) who builds LIAR with 12.8K annotated short statements and six fine-grained labels, and Rashkin et al. (2017) who collect claims without meta-data while providing 74K news articles. We study FEVER (Thorne et al., 2018a), which requires aggregating information from multiple pieces of evidence from Wikipedia for making the conclusion. FEVER contains 185,445 annotated instances, which to the best of our knowledge is the largest benchmark dataset in this area. 
The majority of participating teams in the FEVER challenge (Thorne et al., 2018b) use the same pipeline consisting of three components, namely document selection, evidence sentence selection, and claim verification. In document selection phase, participants typically extract named entities from a claim as the query and use Wikipedia search API. In the evidence selection phase, participants measure the similarity between the claim and an evidence sentence candidate by training a classification model like Enhanced LSTM (Chen et al., 2016) in a supervised setting or using string similarity function like TFIDF without trainable parameters. Padia et al. (2018) utilizes semantic frames for evidence selection. In this work, our focus is the claim classification phase. Top-ranked three systems aggregate pieces of evidence through concatenating evidence sentences into a single string (Nie et al., 2019), classifying each evidence-claim pair separately, merging the results (Yoneda et al., 2018), and encoding each evidence-claim pair followed by pooling operation (Hanselowski et al., 2018). Zhou et al. (2019) are the first to use BERT to calculate claim-specific evidence sentence representations, and then develop a graph network to aggregate the information on top of BERT, regarding each evidence as a node in the graph. Our work differs from Zhou et al. (2019) in that (1) the construction of our graph requires understanding the syntax of each sentence, which could be viewed as a more fine-grained graph, and (2) both the contextual representation learning module and the reasoning module have model innovations of taking the graph information into consideration. Instead of training each component separately, Yin and Roth (2018) show that joint learning could improve both claim verification and evidence selection. 7 Conclusion In this work, we present a graph-based approach for fact checking. When assessing the veracity of a claim giving multiple evidence sentences, our approach is built upon an automatically constructed graph, which is derived based on semantic role labeling. To better exploit the graph information, we propose two graph-based modules, one for calculating contextual word embeddings using graph-based distance in XLNet, and the other for learning representations of graph components and reasoning over the graph. Experiments show that both graph-based modules bring improvements and our final system is the state-of-the-art on the public leaderboard by the time our paper is submitted. Evidence selection is an important component of fact checking as finding irrelevant evidence may lead to different predictions. A potential solution 6178 is to jointly learn evidence selection and claim verification model, which we leave as a future work. Acknowledgement Wanjun Zhong, Zenan Xu, Jiahai Wang and Jian Yin are supported by the National Natural Science Foundation of China (U1711262, U1611264,U1711261,U1811261,U1811264, U1911203), National Key R&D Program of China (2018YFB1004404), Guangdong Basic and Applied Basic Research Foundation (2019B1515130001), Key R&D Program of Guangdong Province (2018B010107005). The corresponding author is Jian Yin. References Gabor Angeli and Christopher D Manning. 2014. Naturalli: Natural logic inference for common sense reasoning. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 534–545. Michele Banko, Michael J Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. 
Open information extraction from the web. In Ijcai, volume 7, pages 2670–2676. Xavier Carreras and Llu´ıs M`arquez. 2004. Introduction to the conll-2004 shared task: Semantic role labeling. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004, pages 89–97. Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2016. Enhanced lstm for natural language inference. arXiv preprint arXiv:1609.06038. Ido Dagan, Dan Roth, Mark Sammons, and Fabio Massimo Zanzotto. 2013. Recognizing textual entailment: Models and applications. Synthesis Lectures on Human Language Technologies, 6(4):1–220. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Robert Faris, Hal Roberts, Bruce Etling, Nikki Bourassa, Ethan Zuckerman, and Yochai Benkler. 2017. Partisanship, propaganda, and disinformation: Online media and the 2016 us presidential election. William Ferreira and Andreas Vlachos. 2016. Emergent: a novel data-set for stance classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: Human language technologies, pages 1163–1168. Ben Goodrich, Vinay Rao, Peter J Liu, and Mohammad Saleh. 2019. Assessing the factual accuracy of generated text. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 166–175. ACM. Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. 2018. Ukp-athene: Multi-sentence textual entailment for claim verification. arXiv preprint arXiv:1809.01479. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Zhenghao Liu, Chenyan Xiong, and Maosong Sun. 2019b. Kernel graph attention network for fact verification. arXiv preprint arXiv:1910.09796. Ndapandula Nakashole and Tom M Mitchell. 2014. Language-aware truth assessment of fact candidates. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1009–1019. Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neural semantic matching networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6859–6866. Ankur Padia, Francis Ferraro, and Tim Finin. 2018. Team UMBC-FEVER : Claim verification using semantic lexical resources. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 161–165, Brussels, Belgium. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and political fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2931–2937. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. 
Self-attention with relative position representations. arXiv preprint arXiv:1803.02155. Dinghan Shen, Xinyuan Zhang, Ricardo Henao, and Lawrence Carin. 2018. Improved semantic-aware network embedding with fine-grained word alignment. arXiv preprint arXiv:1808.09633. 6179 Peng Shi and Jimmy Lin. 2019. Simple bert models for relation extraction and semantic role labeling. arXiv preprint arXiv:1904.05255. Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Thirty-First AAAI Conference on Artificial Intelligence. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018a. Fever: a large-scale dataset for fact extraction and verification. arXiv preprint arXiv:1803.05355. James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2018b. The fact extraction and verification (fever) shared task. arXiv preprint arXiv:1811.10971. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903. Andreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 18–22. Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science, 359(6380):1146–1151. William Yang Wang. 2017. ” liar, liar pants on fire”: A new benchmark dataset for fake news detection. arXiv preprint arXiv:1705.00648. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Wenpeng Yin and Dan Roth. 2018. Twowingos: A twowing optimization strategy for evidential claim verification. arXiv preprint arXiv:1808.03465. Takuma Yoneda, Jeff Mitchell, Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Ucl machine reading group: Four factor framework for fact finding (hexaf). In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 97–102. Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based evidence aggregating and reasoning for fact verification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 892–901, Florence, Italy. Association for Computational Linguistics. A Typology Sort Algorithm Algorithm 1 Graph-based Distance Calculation Algorithm. 
Require: A sequence of nodes S = {si, s2, · · · , sn}; A set of relations R = {r1, r2, · · · , rm} 1: function DFS(node, visited, sorted sequence) 2: for each child sc in node’s children do 3: if sc has no incident edges and visited[sc]==0 then 4: visited[sc]=1 5: DFS(sc, visited) 6: end if 7: end for 8: sorted sequence.append(0, node) 9: end function 10: sorted sequence = [] 11: visited = [0 for i in range(n)] 12: S,R = changed to acyclic graph(S,R) 13: for each node si in S do 14: if si has no incident edges and visited[i] == 0 then 15: visited[i] = 1 16: for each child sc in si’s children do 17: DFS(sc, visited, sorted sequence) 18: end for 19: sorted sequence.append(0,si) 20: end if 21: end for 22: return sorted sequence B FEVER The statistic of FEVER is shown in Table 3. Split SUPPORTED REFUTED NEI Training 80,035 29,775 35,659 Dev 6,666 6,666 6,666 Test 6,666 6,666 6,666 Table 3: Split size of SUPPORTED, REFUTED and NOT ENOUGH INFO (NEI) classes in FEVER. FEVER score is calculated with equation 8, where y is the ground truth label, ˆy is the predicted label, E = [E1, · · · , Ek] is a set of ground-truth evidence, and ˆ E = [ ˆE1, · · · , ˆE5] is a set of predicted evidence. Instance Correct(y, ˆy, E, ˆ E) def = y = ˆy ∧(y = NEI ∨Evidence Correct(E, ˆ E)) (8) C Evidence Selection Results In this part, we present the performance of the sentence-level evidence selection module that we develop with different backbone. We take the concatenation of claim and each evidence as input, and take the last hidden vector to calculate the score for evidence ranking. In our experiments, we try both 6180 RoBERTa and XLNet. From Table 4, we can see that RoBERTa performs slightly better than XLNet here. When we submit our system on the leaderboard, we use RoBERTa as the evidence selection model. Model Dev. Set Test Set Acc. Rec. F1 Acc. Rec. F1 XLNet 26.60 87.33 40.79 25.55 85.34 39.33 RoBERTa 26.67 87.64 40.90 25.63 85.57 39.45 Table 4: Results of evidence selection models. D Training Details In this part, we describe the training details of our experiments. We employ cross-entropy loss as the loss function. We apply AdamW as the optimizer for model training. For evidence selection model, we set learning rate as 1e-5, batch size as 8 and maximum sequence length as 128. In claim verification model, the XLNet network and graph-based reasoning network are trained separately. We first train XLNet and then freeze the parameters of XLNet and train the graph-based reasoning network. We set learning rate as 2e-6, batch size as 6 and set maximum sequence length as 256. We set the dimension of node representation as 100.
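The pseudocode of Algorithm 1 above is hard to read after extraction, so the following is one Python reading of it: a DFS-based topological ordering started from nodes without incoming edges, intended to place closely linked nodes near each other in the reordered evidence sequence. It assumes cycles have already been removed and that `children` and `has_incoming` encode the directed relations; where the flattened pseudocode is ambiguous, the standard DFS pattern is followed.

```python
def graph_based_order(nodes, children, has_incoming):
    """nodes: iterable of node ids; children[n]: list of n's child nodes;
    has_incoming[n]: True if n has an incoming edge. Returns a topological
    ordering (parents before children) built by depth-first search."""
    visited = {n: False for n in nodes}
    ordering = []

    def dfs(node):
        for child in children.get(node, []):
            if not visited[child]:
                visited[child] = True
                dfs(child)
        ordering.insert(0, node)   # prepend after all descendants are placed

    for node in nodes:
        if not has_incoming[node] and not visited[node]:
            visited[node] = True
            dfs(node)
    return ordering
```

Similarly, the FEVER-score condition of Equation (8) in Appendix B can be restated as a small check. The assumption that predicted evidence must fully cover at least one ground-truth evidence group follows the usual shared-task convention, and the names and data layout are illustrative.

```python
from typing import List, Set, Tuple

Sentence = Tuple[str, int]  # (Wikipedia page, sentence id)

def instance_correct(gold_label: str, pred_label: str,
                     gold_evidence_groups: List[Set[Sentence]],
                     pred_evidence: Set[Sentence]) -> bool:
    """The predicted label must match and, unless the claim is NEI, the
    predicted evidence must fully cover at least one gold evidence group."""
    if pred_label != gold_label:
        return False
    if gold_label == "NOT ENOUGH INFO":
        return True
    return any(group <= pred_evidence for group in gold_evidence_groups)

def fever_score(instances) -> float:
    """instances: dicts with gold_label, pred_label, gold_evidence_groups, pred_evidence."""
    return sum(instance_correct(i["gold_label"], i["pred_label"],
                                i["gold_evidence_groups"], set(i["pred_evidence"]))
               for i in instances) / len(instances)
```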
2020
549
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 593–599 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 593 Evaluating Dialogue Generation Systems via Response Selection Shiki Sato1 Reina Akama1,2 Hiroki Ouchi2,1 Jun Suzuki1,2 Kentaro Inui1,2 1Tohoku University 2RIKEN {shiki.sato,reina.a,jun.suzuki,inui}@ecei.tohoku.ac.jp [email protected] Abstract Existing automatic evaluation metrics for open-domain dialogue response generation systems correlate poorly with human evaluation. We focus on evaluating response generation systems via response selection. To evaluate systems properly via response selection, we propose a method to construct response selection test sets with well-chosen false candidates. Specifically, we propose to construct test sets filtering out some types of false candidates: (i) those unrelated to the ground-truth response and (ii) those acceptable as appropriate responses. Through experiments, we demonstrate that evaluating systems via response selection with the test set developed by our method correlates more strongly with human evaluation, compared with widely used automatic evaluation metrics such as BLEU. 1 Introduction Automatic evaluation for open-domain dialogue generation systems has a potential for driving their research and development because of its high reproducibility and low cost. However, existing automatic evaluation metrics, such as BLEU (Papineni et al., 2002), correlate poorly with human evaluation (Liu et al., 2016). This poor correlation arises from a nature of dialogue, that is, there are many acceptable responses to an input context, known as the one-to-many problem (Zhao et al., 2017). To tackle this problematic issue, we focus on evaluating response generation systems via response selection. In this task, systems select an appropriate response for a given context from a set of response candidates. Each candidate has the label that indicates whether the candidate is appropriate response for the given context. Traditionally, response selection has been used to evaluate retrieval-based dialogue systems (Lowe et al., 2015; Wu et al., 2017). We consider applying this task to driving the research for dialogue generation Repository Context: Do you have a car? Ground-Truth: Yes, I have a car. Query No, I have a car. I don’t know. I have a cold. No, I have a car. I don’t know. I have a cold. 2 1 4 Question Retrieve utterances Give scores by human evaluation Remove high-score utterances No, I have a car. I don’t know. I have a cold. 2 1 4 False candidates Figure 1: Overview of the construction method of our test set. First, we retrieve only utterances related to the ground-truth response from a repository. Then, we remove acceptable utterances by human evaluation. systems. Specifically, we consider using response selection to pick out promising systems that should be evaluated more precisely by humans among a lot of candidate systems. We assume that response selection is a valid option for such a preliminary evaluation on the basis of the following assumption: systems that can generate appropriate responses can also select appropriate responses. One advantage of evaluating generation systems via response selection is that it can remedy the one-to-many problem, because we do not have to consider the appropriate responses that are not included in sets of response candidates. Another advantage is that it enables a simple and clear comparison between systems in accuracy. 
Generally, false response candidates are randomly sampled from a repository (Lowe et al., 2015; Gunasekara et al., 2019), which causes two problems: (i) unrelated false candidates and (ii) acceptable utterances as false. The first problem is that randomly sampled false candidates are often too far from ground-truth responses. Consider the case where for a given context “Do you have a car?”, a response candidate “I play tennis.” is ran594 domly sampled. Systems can easily recognize this candidate as a false one because there are no related content words between them. Such excessive easiness is not preferable because the performance gap between good and inferior systems tends to be small. The second problem is that there is no guarantee that randomly sampled candidates are always unacceptable ones. For example, “I don’t know.” is often sampled as a false response because this phrase often occurs in open-domain dialogues. This phrase can be regarded as acceptable for various contexts. These two problems make general response selection test sets unreliable. In this work, we propose a method to construct response selection test sets with well-chosen false candidates (Figure 1). First, we retrieve only utterances related to the ground-truth response. Then we remove acceptable utterances by human evaluation. Through experiments, we demonstrate that automatic evaluation using the test set developed by our method correlates more strongly with human evaluation, compared with widely used automatic evaluation metrics such as BLEU. Our empirical results indicate that response selection with wellchosen false candidates can be a valid option for evaluating response generation systems. We will release the test set used in the experiments.1 2 Related Work Automatic evaluation metrics Various metrics have been proposed for automatic evaluation of dialogue systems, such as BLEU, METEOR (Banerjee and Lavie, 2005), ROUGE (Lin, 2004), Greedy Matching (Rus and Lintean, 2012), and Vector Extrema (Forgues et al., 2014). These metrics evaluate the quality of the responses generated by systems. However, this is challenging due to the oneto-many problem. For example, ADEM, a metric proposed by (Lowe et al., 2017), is easily fooled by adversarial examples (responses) (Sai et al., 2019). To remedy one-to-many problem, we focus on evaluating systems via response selection. Response selection test sets with human labels One popular test set for response selection is Douban Conversation Corpus in Chinese (Wu et al., 2017). In this test set, each response candidate has a manually annotated label that indicates whether or not the candidate is appropriate for the given context. Although this test set is similar to ours, 1The test set is available at https://github.com/ cl-tohoku/eval-via-selection. there are some differences between the purposes and procedure of test set designs. The purpose of creating their test set is to simulate and evaluate retrieval-based dialogue systems. Thus, all the candidates in this corpus are retrieved by using the context as queries, as retrieval-based systems do. In this paper, we develop an English response selection test set with human labels to evaluate dialogue generation systems. One of the salient differences from Douban Conversation Corpus is the procedure of retrieving false candidates. We retrieve false candidates using the ground-truth responses. 
By this method, we can more certainly collect false candidates that are related to ground-truth responses and facilitate error analysis as described in Section 4.3. 3 Test Set Construction 3.1 Construction Method For each context c and ground-truth response rtrue, we construct a set of false response candidates rfalse ∈Rfalse by retrieving utterances from an utterance repository u ∈U. As we mentioned in Section 1, we want to filter out some types of utterance: (i) those unrelated to the ground-truth response and (ii) those acceptable as appropriate responses. We filter out such utterances as follows: 1. Retrieve M utterances, {u1, · · · , uM}, related to the ground-truth response rtrue from the utterance repository U. 2. Remove acceptable ones from the retrieved utterances by human evaluation. 1. Retrieve utterances related to the groundtruth response We assume that utterances related to the ground-truth response share some similar content words between them. Here, we retrieve the related utterances on the basis of the similarities of the content words. This process makes it difficult for systems to distinguish between groundtruth and false candidates only by comparing the content words. 2. Remove acceptable utterances Coincidentally, some of the retrieved utterances may be acceptable as an appropriate response. To remove such utterances, we ask human annotators to evaluate each retrieved utterance. Specifically, we instruct five annotators (per candidate) to score each retrieved candidate in a five-point scale from 1 to 5. A score of 5 means that the utterance can clearly be regarded as an appropriate response for the given 595 context, whereas a score of 1 means that it cannot be regarded as an appropriate one at all. In addition to the scores, we also instruct annotators to give a score of 0 to ungrammatical utterances. We remove the utterances that are given a score of 3 or higher by three or more annotators because these utterances with a high score can be acceptable. In addition, we remove the utterances that are given a score of 0 by three or more annotators because these are likely to be ungrammatical ones. We also instruct annotators to score ground-truth responses, combining them with retrieved utterances. We remove the questions if the score of the ground-truth response is low, i.e., three or more annotators give a score of 3 or lower. This is intended to ensure that ground-truth responses are certainly appropriate for the given context. 3.2 Overview of Constructed Test Set Settings of test set construction We retrieve 10 utterances (per question) from the repository and remove acceptable ones following the method described in Section 3.1. We use crowdsourcing2 to score the retrieved utterances. After removing acceptable utterances, there are some questions that have 6 or more available false candidates. From these questions, we develop new questions with the same context but different candidates (both groundtruth responses and false candidates). We regard one of acceptable utterances removed by human evaluation as the ground-truth responses of new questions. We use the dialogue data from DailyDialog (Li et al., 2017) to construct the test set. We extract the four beginning turns of each dialogue sample from DailyDialog, regarding the fourth utterance as the ground-truth response. We extract the utterances of OpenSubtitles2018 (Lison et al., 2018) to construct the repository used to retrieve false candidates. 
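The removal criteria of Section 3.1 reduce to simple counting rules over the five annotator scores; the sketch below spells them out, with toy inputs for illustration. The function names are not from the original implementation.

```python
def keep_false_candidate(scores):
    """scores: the five annotator scores for one retrieved utterance
    (0 = ungrammatical, 1-5 = appropriateness). The utterance is removed if
    three or more annotators rate it 3 or higher (too acceptable) or if three
    or more annotators mark it ungrammatical."""
    too_acceptable = sum(s >= 3 for s in scores) >= 3
    ungrammatical = sum(s == 0 for s in scores) >= 3
    return not (too_acceptable or ungrammatical)

def keep_question(ground_truth_scores):
    """Drop the whole question if three or more annotators give the
    ground-truth response a score of 3 or lower."""
    return sum(s <= 3 for s in ground_truth_scores) < 3

# toy usage
print(keep_false_candidate([1, 2, 2, 4, 1]))   # True: kept as a false candidate
print(keep_false_candidate([4, 3, 5, 2, 3]))   # False: acceptable, removed
print(keep_question([4, 5, 4, 3, 2]))          # True: ground truth judged appropriate
```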
Note that the repository does not contain the utterances in the dialogue data used to train response generation systems in Section 4.1. Statistics of our test set We developed the test set that consists of 1, 019 questions with 4 candidates (1 ground-truth + 3 false candidates). Table 1 shows the basic statistics of our test set. The Fleiss’ Kappa (Fleiss, 1971) of the annotators’ scoring in the six scale is 0.22.3 Note that if we 2https://www.mturk.com/ 3We calculated Fleiss’ Kappa based on the scale of the scores as categorical. Total questions 1,019 Candidates per question 4 Context turns per question 3 Kappa of the scoring (six classes) 0.22 Kappa of the scoring (two classes) 0.63 Table 1: Basic statistics of our test set Context: A: Excuse me. Could you please take a picture of us with this camera? B: Sure. Which button do I press to shoot? A: This one. Candidates: 1. Could he not focus on that? 2. But I do have ninja focus. 3. Do not lose your focus! 4. Do I have to focus it? [Ground-truth] Table 2: Example of our test set. All three false candidates contain the content word “focus”, which is related to the context (topic). regard the scoring as binary classification (scores higher than 3 are regarded as appropriate responses, and the others not), the Fleiss’ Kappa of the scoring is 0.63, which is higher than Douban Conversation Corpus (0.41). Example of our test set Table 2 shows an example of our test set. All the false response candidates share the same content word “focus” related to the topic “camera”. Preliminary experiments We conducted a simple experiment to investigate whether or not a system that takes only content words into account can recognize false response candidates in our test set. For the model, we used the TF-IDF model (Lowe et al., 2015), which simply compares between content words of a given context and each candidate. As a result, the accuracy was 0.461. For a comparison, we also replaced all the false candidates in our test set with randomly sampled utterances. The accuracy of the same TF-IDF model increased to 0.671. These results indicates that it is difficult to recognize false candidates in our test set only by comparing content words. 4 Experiments We test whether the automatic evaluation of response generation systems on our test set correlates with human evaluation. 596 4.1 Experimental Procedure We train multiple response generation systems and rank them on the basis of human and automatic evaluation scores. By comparing between the system ranking by human scores and the ranking by each automatic score, we verify the correlations. 4.1.1 Response Generation Models We train 10 different response generation systems to be ranked in the experiments. Their architectures are ones of Seq2Seq with GRU (Cho et al., 2014), Seq2Seq with LSTM (Hochreiter and Schmidhuber, 1997), or Transformer (Vaswani et al., 2017). Some systems have same architecture, but different hyper-parameters.4 We train the models on OpenSubtitles2018. The training data consists of 5M samples and the validation data consists of 0.05M samples, each of which is four-turns dialogue. 4.1.2 Evaluation Procedure Ground-truth system ranking by human scores The trained systems generate a response rgen for each input context c ∈C. Then, five human annotators (per response) score each generated response rgen in a five-point scale from 1 to 5. 
A score of 5 means that the response can clearly be regarded as an appropriate response for the given context, whereas a score of 1 means that it cannot be regarded as an appropriate one at all. As a result, we obtain five scores, {s1, s2, · · · , s5}, for each response rgen and average them: smean = mean(s1, s2, · · · , s5). We also average smean across all the questions in the test set and yield the final score sfinal for each system. Based on this score, we make a ranking of the systems and regard it as the ground-truth ranking. Although we developed the test set that consists of 1,019 questions, it is too costly to evaluate all the 10 systems’ responses for 1,019 questions by humans. Thus we give the context of 56 randomly sampled questions from our test set to the 10 systems as inputs C. System ranking by response selection accuracy We rank the systems by response selection accuracy with well-chosen false candidates (CHOSEN). The trained response generation systems compute the softmax cross-entropy loss ℓr for each response candidate r ∈R. We regard the candidate with the lowest loss as the system’s selection: 4We describe the model settings in Appendix B. Metrics Spearman p-value BLEU-1 −0.36 0.30 BLEU-2 0.085 0.82 METEOR 0.073 0.84 ROUGE-L 0.35 0.33 RANDOM 0.43 CHOSEN 0.48 0.19 HUMAN 0.87 0.0038 Table 3: Correlations between the ground-truth system ranking and the rankings by automatic evaluation. ˆr = argmin r∈R ℓr. From the predictions, we calculate accuracy and make a ranking of the systems based on the accuracy. For comparison, we also make a ranking by response selection accuracy with randomly sampled false candidates (RANDOM).5 We compute the accuracy of CHOSEN and RANDOM using all 1, 019 questions from our test set. System ranking by other evaluation metrics For comparison, we also make rankings of the systems by three existing automatic evaluation metrics: BLEU, METEOR, and ROUGE-L. First, the trained systems generate a response for each input context. Then we compute the scores comparing generated responses and the ground-truth responses. These scores can be computed automatically without false candidates. Thus we compute them using all 7, 393 available four-turns dialogue samples from DailyDialog, regarding the fourth utterances as the ground-truth responses. 4.2 Results We compare the rankings by Spearman’s rank correlation coefficients, shown in Table 3. First, we yielded the human upper bound. we evaluated the correlation between the rankings made by different annotators (HUMAN). We randomly divided human evaluation into two groups and made two rankings. The correlation coefficient between the two rankings was 0.87. Second, we found that the rankings made using existing automatic evaluation metrics correlate poorly with ground-truth ranking. BLEU, often used to evaluate generation systems, does not correlate with human evaluation at all. One exception is ROUGE-L. However, 5We compute the coefficient of RANDOM by averaging the coefficients of different 100 trials. 597 Figure 2: Box plot of Spearman’s rank correlation coefficients between the ground-truth ranking and the rankings by RANDOM. A dot in blue indicates the correlation coefficient of CHOSEN. Context: A: Peter, enough with your computer games. Go do your homework now. B: Can’t I play more? A: No! Stop playing computer games! Candidates: Ground-Truth: Mom, I’ll be finished soon. RANDOM: Thats the problem with small towns. CHOSEN: You are to be finished very soon. 
Table 4: Examples of a randomly sampled and wellchosen candidates. its correlation coefficient is lower than 0.4, which means reasonable correlation. Third, we found that the ranking made by using our test set reasonably correlates with the ground-truth ranking compared with other metrics, and the correlation coefficient (CHOSEN) is higher than 0.4. 4.3 Discussion Instability of evaluation with random sampling The correlation coefficient of the ranking by response selection with randomly sampled false candidates (RANDOM) is higher than that of BLEU and slightly lower than that of CHOSEN. However, a serious problem has been observed: the instability. We make 100 test sets, each of which consists of different false candidates by random sampling with different seeds. For each test set, we make a system ranking and compute its coefficient. Figure 2 shows the box plot of the Spearman’s rank correlation coefficients of the trials. The range of the coefficients is very wide (0.06-0.67). This result means that the quality of evaluation with randomly sampled false candidates strongly depends on the sampled candidates, which is the uncontrollable factor stemming from the randomness. Interpretable error analysis Our automatic evaluation with well-chosen false candidates brings another benefit: the interpretable error analysis. Table 4 shows an example of a question of our test set. The well-chosen false candidate (CHOSEN) is similar to the ground-truth response. However, the grammatical subject of the CHOSEN sentence is “You”, which completely mismatches the context. Thus if systems select this false candidate, they may lack the ability to determine correctly the subject of sentences. In this way, our test set enables us to analyze systems’ predictions from various meaningful perspectives. As a case study, we design a set of error labels, each of which indicates why the false candidate is false, and assign them to 50 false candidates in our test set. We succeed in assigning the labels to 22 out of 50 candidates.6 Limitation Our test set is designed to evaluate open-domain dialogue generation systems. Thus, it is not suitable for evaluating other types of dialogue system such as task-oriented ones. By contrast, existing automatic evaluation metrics, such as BLEU, do not have this type of restriction. 5 Conclusion In this paper, we focused on evaluating response generation systems via response selection. To evaluate systems properly via response selection, we proposed a method to construct response selection test sets with well-chosen false candidates. Specifically, we proposed to construct test sets filtering out some types of false candidates: (i) those unrelated to the ground-truth response and (ii) those acceptable as appropriate responses. We demonstrated that evaluating systems via response selection with the test sets developed by our method correlates more strongly with human evaluation, compared with that of widely used metrics such as BLEU. In the future, we will provide labels that indicate “Why this candidate is false” for false candidates in our test set, so that one can easily detect weak points of systems through error analysis. Acknowledgments This work was partially supported by JSPS KAKENHI Grant Number JP19H04162. We would like to thank the laboratory members who gave us advice and all reviewers of this work for their insightful comments. 6We show some of the examples in Appendix C. 598 References Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. 
A simple but tough-to-beat baseline for sentence embeddings. In Proceedings of the 5th International Conference on Learning Representations (ICLR). Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72. Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378–382. Gabriel Forgues, Joelle Pineau, Jean-Marie Larchevˆeque, and R´eal Tremblay. 2014. Bootstrapping dialog systems with word embeddings. In NeurIPS modern machine learning and natural language processing workshop. Chulaka Gunasekara, Jonathan K. Kummerfeld, Lazaros Polymenakos, and Walter Lasecki. 2019. DSTC7 task 1: Noetic end-to-end response selection. In Proceedings of the First Workshop on NLP for Conversational AI, pages 60–67. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (IJCNLP), pages 986– 995. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81. Pierre Lison, J¨org Tiedemann, and Milen Kouylekov. 2018. OpenSubtitles2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC), page 1742–1748. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2122–2132. Ryan Lowe, Michael Noseworthy, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic Turing test: Learning to evaluate dialogue responses. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1116–1126. Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 285–294. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2227–2237. Vasile Rus and Mihai Lintean. 2012. A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 157– 162. Ananya Sai, Mithun Das Gupta, Mitesh M. Khapra, and Mukundhan Srinivasan. 2019. Re-evaluating adem: A deeper look at scoring dialogue responses. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, page 6220–6227. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NeurIPS), pages 5998–6008. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 496–505. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 654–664. 599 A Methods to Retrieve False Candidates To make false candidates in each pool diverse, we use two retrieval methods: lexical retrieval and embedding-based retrieval. We use Lucene7 for lexical retrieval, and cosine similarity of sentence vectors for embedding-based retrieval. Sentence vectors are SIF (Arora et al., 2017) weighted average of ELMo word vectors (Peters et al., 2018). B Detailed Model Settings in the Experiments We trained 10 different response generation systems to be ranked in the experiments. We trained them with different architectures or settings. The common settings for the model training are shown in Table 5 and the hyper-parameters of each the models are shown in Table 6. Vocab size 16,000 Batch size 6,000 tokens Loss cross entropy Learning rate 1e-4 (fixed) Optimizer Adam Table 5: Common settings for the model training in the experiments. No. Architecture Enc/Dec layers Enc/Dec embed dim Enc/Dec hidden dim 1 GRU 1 / 1 256 / 256 256 / 256 2 GRU 1 / 1 512 / 512 512 / 512 3 GRU 2 / 2 256 / 256 256 / 256 4 GRU 2 / 2 512 / 512 512 / 512 5 LSTM 1 / 1 256 / 256 256 / 256 6 LSTM 1 / 1 512 / 512 512 / 512 7 LSTM 2 / 2 512 / 512 512 / 512 No. Architecture Enc/Dec layers Enc/Dec embed dim Enc/Dec attention heads 8 Transformer 2 / 2 256 / 256 4 / 4 9 Transformer 2 / 2 512 / 512 4 / 4 10 Transformer 4 / 4 256 / 256 4 / 4 Table 6: Hyper-parameters of each model in the experiments. C Labels for False Candidates As a case study, we designed a set of error labels, each of which indicates why the false candidate is false. To confirm whether we can assign the labels to the false candidates collected by our test set construction method, We assigned the labels 7https://lucene.apache.org/ to 50 false candidates from our test set. We could eventually assign the labels to 22 candidates. The types of our error labels and the breakdown are listed in Table 7. The examples of false candidates (CHOSEN) corresponded to the error labels are shown in Table 4 (for labeled “Responses that have wrong subjects”), Table 8, Table 9, and Table 10. 
Error label                                    Count
Inconsistent responses with the context            8
Responses that have insufficient information       4
Responses that have wrong subjects                 9
Responses with wrong tense                         1
Table 7: Error labels and the breakdown of the assigned labels.

Context:
A: 911 emergency. What is the problem?
B: I would like to report a break-in.
A: When was this break-in?
Candidates:
Ground-Truth: I believe it happened last night.
CHOSEN: I thought that would happen last night.
Table 8: Example of a false candidate labeled "Inconsistent responses with the context."

Context:
A: What's the matter with you, Paul?
B: I'm not feeling well. I think I'm having a cold.
A: Looks like it. You need to drink a lot of water and take a good rest.
Candidates:
Ground-Truth: Yeah, I will.
CHOSEN: Yeah, yeah, yeah, I...
Table 9: Example of a false candidate labeled "Responses that have insufficient information."

Context:
A: Hi, charlie, are you busy this evening?
B: Sorry, I'm afraid that I've got plans tonight.
A: What are you doing?
Candidates:
Ground-Truth: I'm going to my parents' house for my father's birthday.
CHOSEN: We were at my sister's house for my nephew's birthday by 2 p.m.
Table 10: Example of a false candidate labeled "Responses with wrong tense."
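The embedding-based retrieval of Appendix A reduces to a weighted average of word vectors followed by cosine similarity. The sketch below is illustrative only: it assumes generic pre-computed word vectors and unigram probabilities in place of the ELMo vectors used here, and the function names and the weighting constant a are not from this paper.

import numpy as np

def sif_sentence_vector(tokens, word_vecs, word_prob, a=1e-3):
    """SIF-style sentence vector (Arora et al., 2017): word vectors averaged
    with weights a / (a + p(w)).  The full SIF method also removes the first
    principal component across all sentences; that step is omitted here."""
    vecs, weights = [], []
    for w in tokens:
        if w in word_vecs:
            vecs.append(word_vecs[w])
            weights.append(a / (a + word_prob.get(w, 1e-6)))
    if not vecs:
        return None
    return np.average(np.array(vecs), axis=0, weights=weights)

def retrieve_similar(query_vec, pool_vecs, top_k=10):
    """Rank a pool of utterance vectors by cosine similarity to the query;
    the top-ranked utterances form the pool of false-candidate material."""
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))
    ranked = sorted(range(len(pool_vecs)),
                    key=lambda i: cos(query_vec, pool_vecs[i]), reverse=True)
    return ranked[:top_k]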
2020
55
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6181–6190 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6181 Automatic Generation of Citation Texts in Scholarly Papers: A Pilot Study Xinyu Xing, Xiaosheng Fan, Xiaojun Wan Wangxuan Institute of Computer Technology, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University {xingxinyu,fanxiaosheng,wanxiaojun}@pku.edu.cn Abstract In this paper, we study the challenging problem of automatic generation of citation texts in scholarly papers. Given the context of a citing paper A and a cited paper B, the task aims to generate a short text to describe B in the given context of A. One big challenge for addressing this task is the lack of training data. Usually, explicit citation texts are easy to extract, but it is not easy to extract implicit citation texts from scholarly papers. We thus first train an implicit citation text extraction model based on BERT and leverage the model to construct a large training dataset for the citation text generation task. Then we propose and train a multi-source pointer-generator network with cross attention mechanism for citation text generation. Empirical evaluation results on a manually labeled test dataset verify the efficacy of our model. This pilot study confirms the feasibility of automatically generating citation texts in scholarly papers and the technique has the great potential to help researchers prepare their scientific papers. 1 Introduction A scientific paper usually needs to cite a lot of reference papers and introduce each reference paper with some text. In this study, the text describing a reference paper is called citation text. A researcher usually needs to find relevant papers he wants to cite and write some text to introduce them when writing a scientific paper. However, the process of writing citation texts is tedious and time-consuming. In order to reduce the burden of researchers, we propose and try to address the task of automatic citation text generation. Automatic generation of citation texts in scholarly papers is a challenging and meaningful task, however, there are very few studies investigating this problem. Given a cited paper B and the context in a citing paper A (i.e., the sentences before and after a specific position in paper A), the task aims to generate a short text to describe B with respect to the given context in A. The task is like the task of scholarly paper summarization (Luhn, 1958; Edmundson, 1969; Qazvinian and Radev, 2008; Mei and Zhai, 2008). Both of the two tasks aim to produce a text to describe the cited paper B. The major difference between the two tasks is that the citation texts reflect not only the salient content of B, but also the context of A. Different citing papers usually have different descriptions of the same cited paper. Sometimes one paper may cite another paper several times in different positions but give different descriptions because the specific contexts are different. Another difference between the two tasks is the length of the text. A citation text is usually much shorter than a paper summary. Generally, citation text generation can be considered as a task of generating a very short summary of paper B given the context of paper A. The difficulty lies in that given different A or different contexts of A, the task aims to produce different citation texts for the same B. 
Most commonly, the citation text is a single sentence, but sometimes it may consist of several sentences (Jebari et al., 2018; Qazvinian and Radev, 2010; Sondhi and Zhai, 2014). Like (Small, 2011), we define citation text as a block of text composed of one or more consecutive sentences surrounding the reference sign. Each citation sentence can be classified as explicit or implicit (Qazvinian and Radev, 2010; Athar and Teufel, 2012; Yasunaga et al., 2019). Explicit citation is a citation sentence that contains explicit reference to the cited paper. An implicit (or non-explicit) citation sentence appears around the explicit citation sentence and it does not attach any explicit reference to the cited paper but supplies additional information about the cited paper. The citation text generation task in this study aims to generate both explicit and implicit 6182 citation sentences. We build a citation text generation dataset based on the ACL Anthology Network corpus (AAN) (Radev et al., 2013). We first perform human annotation and get 1,000 citation texts (including explicit and implicit citation sentences). We randomly select 400 citation texts as test set, and use the other 600 citation texts to first train a citation text extraction model and then use the extraction model to automatically extract many more citation texts to build a large-scale training dataset. With the training dataset we construct, we can train our citation generation model. In this paper, we use pointer-generator network (See et al., 2017) as the baseline model. We believe that the key to dealing with citation text generation problem is modelling the relationship between the context of citing paper A and the content of cited paper B. So we encode the context of paper A and the abstract of paper B separately, and add cross attention mechanism by making context and abstract attend to each other. We call our model multi-source pointergenerator network with cross attention mechanism. The evaluation results show that our model outperforms the baseline models. Our contributions are summarized as follows: • We propose a new task of automatic citation text generation in scholarly papers. • We annotate 1,000 citation texts and train a citation extraction model to automatically construct a large training dataset for the citation text generation task. The data are available at https://github.com/XingXinyu96/ citation_generation. • We propose the multi-source pointergenerator network with cross attention mechanism to address this challenging task. Evaluation results demonstrate the efficacy of our proposed model. 2 Related Work Firstly, we introduce some studies on citation extraction. Kaplan et al. (2009) proposed a method based on coreference-chains for citation extraction. Sondhi and Zhai (2014) first independently trained a separate HMM for each citation in the article and then performed a constrained joint inference to label non-explicit citing sentences. Qazvinian and Radev (2010) proposed a framework based on probabilistic inference to extract implicit citations. Jebari et al. (2018) proposed an unsupervised approach which is based on topic modeling and word embedding for implicit citation extraction. Jebari et al. (2018) introduced method based on neural network but it did not give out convincing evaluation results. A few studies have investigated the task of summarizing single scholarly paper, i.e., single document summarization in the scientific domain, which is relevant to the citation text generation task. 
Early works include (Luhn, 1958; Baxendale, 1958; Edmundson, 1969), and they tried to use various features specific to scientific articles for summary extraction. Later on, citation information has shown its usefulness for scientific paper summarization (Qazvinian and Radev, 2008; Mei and Zhai, 2008; Qazvinian and Radev, 2010; Cohan and Goharian, 2018; Yasunaga et al., 2019). Several benchmark tests have been set up for scientific summarization, including TAC 2014 Biomedical Summarization track and the CL-SciSumm Shared Task (Jaidka et al., 2016). A few other studies have investigated the task of summarizing multiple scholarly papers, i.e., multi-document summarization in the scientific domain (Mohammad et al., 2009; Yeloglu et al., 2011; Chen and Zhuge, 2014). Related work generation is a special case of multi-document scientific summarization (Hoang and Kan, 2010; Hu and Wan, 2014; Chen and Zhuge, 2019). However, the above related work about scholarly paper summarization is different from the task of citation text generation, which aims to generate a usually very short text to describe the cited paper in the given context of the citing paper. 3 Problem and Corpus Formally, given a citing paper A, a cited paper B and the context C in A, the task aims to generate the citation text T to describe B. The context C refers to the sentences surrounding the target citation text in A and it is provided to distinguish different mentions of B in different positions of A. The following example shows a paragraph of (Lu et al., 2008) and this article cites paper (Wong and Mooney, 2006). In this example, A refers to (Lu et al., 2008) and B refers to (Wong and Mooney, 2006). The sentence underlined (i.e., the second sentence) is an explicit citation, and the sentence in italics (i.e., the third sentence) is an implicit citationand both of them compose the citation text. The remaining two 6183 sentences (i.e., the first and last sentences) compose the context C of A. The phrase in bold which indicates the explicit citation to paper B is called reference sign. And the explicit citation text can be defined as the sentence with a reference sign to the cited paper. The implicit citation text can be defined as the sentences that provide information about the cited paper but do not have any reference sign. ...SILT (Kate et al., 2005) learns deterministic rules to transform either sentences or their syntactic parse trees to meaning structures. WASP (Wong and Mooney, 2006) is a system motivated by statistical machine translation techniques. It acquires a set of synchronous lexical entries by running the IBM alignment model and learns a log-linear model to weight parses. KRISP (Kate and Mooney, 2006) is a discriminative approach ... In this study, we build a citation generation dataset based on the ACL Anthology Network corpus (AAN) (Radev et al., 2013). The ACL anthology is a collection of papers from the Computational Linguistics journal, and proceedings from ACL conferences and workshops. In particular, we download and use the 2014 version of the AAN corpus which includes almost 23594 papers. After removing papers containing many garbled characters and papers without abstracts, there remains 16675 papers. The metadata of each paper and the paper citation network have been extracted and stored. We find all the mentions of each reference paper in a citing paper by using manually designed regular expressions to match the corresponding reference signs. Lastly, we extract 86052 explicit citations for further use. 
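The reference-sign matching itself is plain string processing. The paper does not list its hand-crafted regular expressions; the single pattern below is only an illustration of the idea for parenthesised author-year reference signs, and all names are ours.

import re

# Illustrative pattern covering signs such as "(Wong and Mooney, 2006)"
# or "(Kate et al., 2005)"; the paper's actual expressions are not given.
REF_SIGN = re.compile(
    r"\([A-Z][A-Za-z'-]+(?:\s+(?:and|et\s+al\.)\s*)?[^()]*?,\s*(?:19|20)\d{2}[a-z]?\)"
)

def find_reference_signs(sentence):
    """Return character spans of candidate reference signs in a sentence."""
    return [m.span() for m in REF_SIGN.finditer(sentence)]

example = ("WASP (Wong and Mooney, 2006) is a system motivated by "
           "statistical machine translation techniques.")
print(find_reference_signs(example))   # [(5, 28)]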
3.1 Annotation Process For each reference sign, we perform human annotation to get all citation sentences. We label a vector in which each dimension corresponds to a sentence. A sentence is marked with C if it is an explicit citation, and with 1 if it is an implicit citation. All other sentences are marked with 0. The label vector of the example we mentioned before is [0,C,1,0]. Our annotation process has two steps. First, we annotate the explicit citation sentences. Despite we have extracted explicit citations with rules, we cannot assure that the extraction is completely correct. In order to accurately evaluate the performance of our methods, the explicit citations in the test dataset should be human annotated. We randomly choose some automatically extracted explicit citations and highlight the reference signs we find. The annotators only need to judge if they think the extraction of reference sign is correct. We stop this step when we get 1,000 explicit citations which are ensured correct by human. The second step is to annotate implicit citation texts. For each explicit citation sentence, we take three sentences before it and three sentences after it as candidate sentences1. Note that all the candidate sentences must be in the same section as the explicit citation sentence. We provide candidate sentences, explicit citation sentence, abstract of citing paper and cited paper for every annotator. Explicit citation sentence has already been labelled with C, and the annotators just need to label other sentences with 1 or 0. Note that we require the citation sentences to be continuous, which means there cannot be non-citation sentences between two citation sentences. To make the data more reliable, we make sure that every annotation instance must be annotated by three different people. When they disagree with each other, we take the label chosen by majority. After the annotation process, we get 1,000 annotated citation texts (including both explicit and implicit citation sentences) for further use. We randomly choose 400 citation texts as the final test dataset and the remaining citation texts are used for training. 4 Implicit Citation Text Extraction Model After the annotation process, we have 400 citation texts as test dataset and 600 citation texts for training. However, we need large-scale training data to train a feasible citation text generation model. So we decide to use the 600 human annotated citation texts to train an implicit citation text extraction model to expand our training dataset. We treat implicit citation text extraction as a sequence labeling problem and use BERT (Devlin et al., 2018) to deal with this problem. We add a classification layer on the final hidden representation of BERT and fine-tune the whole model on our dataset. We concatenate all the candidate sentences, the explicit citation sentence and the abstract of the cited paper as the input of BERT. We add a special tag ’[s]’ at the beginning of all sentences, a special tag ’[explicit]’ at the beginning of the explicit citation sentence and a special tag ’[abs]’ at the be1For simplicity, we do not consider the sentences with a long distance to the explicit citation. 6184 Precision Recall F-value Acc α=0.9 73.68 55.55 62.95 92.53 α=0.1 64.23 62.02 62.94 91.67 Table 1: Average test results for 10 fold crossvalidation Precision Recall F-value Acc α=0.9 72.16 53.21 61.06 91.43 α=0.1 64.79 60.60 62.50 90.80 Table 2: Average test results on external test data ginning of the cited paper’s abstract. 
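Implementation details of this tagger are not given; the following is a minimal sketch using the HuggingFace transformers library, where the hidden state at each '[s]' marker serves as that sentence's representation and a sigmoid head gives P(implicit). Apart from the tag names, everything here is an assumption rather than the authors' code.

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class ImplicitCitationTagger(nn.Module):
    """Scores each candidate sentence as implicit citation vs. not."""

    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained(model_name)
        # Register the paper's markers so they stay single tokens.
        self.tokenizer.add_tokens(["[s]", "[explicit]", "[abs]"])
        self.bert = BertModel.from_pretrained(model_name)
        self.bert.resize_token_embeddings(len(self.tokenizer))
        self.classifier = nn.Linear(self.bert.config.hidden_size, 1)
        self.s_id = self.tokenizer.convert_tokens_to_ids("[s]")

    def forward(self, text):
        # `text` concatenates the candidate sentences, the explicit citation
        # and the cited abstract, each prefixed with its marker as described.
        enc = self.tokenizer(text, return_tensors="pt",
                             truncation=True, max_length=512)
        hidden = self.bert(**enc).last_hidden_state[0]        # (seq_len, hidden)
        sent_repr = hidden[enc["input_ids"][0] == self.s_id]  # one row per '[s]'
        # P(implicit) per candidate; train with binary cross-entropy against
        # the 0/1 labels of Section 3.1.
        return torch.sigmoid(self.classifier(sent_repr)).squeeze(-1)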
The abstract of cited paper does not need to be labelled but it can provide a lot of information to help label the candidate sentences. BERT gives out the probability of every sentence to be implicit citation. We set a threshold α to control the identification of implicit citation sentence. When the probability given out by BERT is greater than α, we take the corresponding sentence as an implicit citation sentence. It is obvious that the smaller α is, the more sentences will be recognized as implicit citation sentence. To ensure the citation text being continuous, we start to identify implicit citation sentences from the explicit citation sentence to both sides and stop when meeting the first non-citation sentence. We do 10 fold cross-validation on our training dataset and use the 400 test data as external test data. The 600 training data are split into 10 subsets. When training, we use 9 subsets for training and use the remaining one subset as test set. The average results for cross-validation are shown in Table 1. The average results on external test data are shown in Table 2. Our model is compared with these baseline models: All one: It labels all candidate sentences with 1. Random: It labels all candidate sentences randomly. Cosine sim: It first uses bag of words model to represent all texts as vectors. Then it calculates the cosine similarity between candidate sentence and cited paper’s abstract, and the cosine similarity between candidate sentence and the explicit citation sentence. When the two similarities are both greater than the threshold, the sentence is labelled with 1. W2v sim: This model is also based on similarity. The similarity in this model is calculated based on word2vec model. With two sequence of words, it first gets the corresponding two sequences of Precision Recall F-value Acc All one 12.67 100.00 22.49 12.67 Random 12.31 49.40 19.71 49.01 Cosine sim 16.87 54.62 25.78 60.15 W2v sim 19.43 54.62 28.66 65.55 SVM 34.39 26.10 29.68 84.33 Table 3: Test results on external test data for the baseline models Precision Recall F-value Acc α=0.9 73.66 60.64 66.52 92.26 α=0.1 66.02 68.67 67.32 91.55 Table 4: Test results on external test data when using full training data vectors {ui} and {vj} with word2vec model. Then it uses the two sequences of vectors to calculate a similarity matrix M. The element of the matrix Mi,j = cos(ui, vj). Finally it keeps the max value of every row vector and takes the average value of the max value list as the final similarity. SVM: It trains an SVM to classify if a sentence is implicit citation sentence. The features include sentence position feature, special pattern feature, similarity feature, etc. Results of all these baseline models are shown in Table 3. As shown in these tables, our extraction model outperforms all the baseline models. The F-value of our extraction models with α=0.1 and α=0.9 are very close. This indicates that they have close performance. The precision of extraction model with α=0.9 is higher, while the recall of extraction model with α=0.1 is higher. So we can get two different extraction models with two different α. And with the two different extraction models, we can construct two different datasets for further training citation generation model. To get the two different datasets, we use all 600 data to train two final extraction models. We call the extraction model with α=0.1 EXTα=0.1 and call the extraction model with α=0.9 EXTα=0.9. The results on external test data when using full training data are shown in Table 4. 
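The rule that turns these per-sentence probabilities into a contiguous citation block (threshold α, expansion from the explicit sentence towards both sides, stopping at the first non-citation sentence) is simple to state precisely. The sketch below is illustrative; the function and variable names are not from the paper.

def expand_implicit_citations(probs, explicit_idx, alpha=0.9):
    """Walk outwards from the explicit citation sentence and keep adding
    candidates while their predicted probability exceeds alpha, stopping at
    the first sentence that fails the test (continuity constraint).

    probs:        P(implicit) for each candidate sentence, in document order
    explicit_idx: index of the explicit citation sentence in that list
    Returns the sorted indices of the citation text, explicit sentence included.
    """
    selected = {explicit_idx}
    for step in (-1, 1):                       # left side, then right side
        i = explicit_idx + step
        while 0 <= i < len(probs) and probs[i] > alpha:
            selected.add(i)
            i += step
    return sorted(selected)

# alpha = 0.9 yields the precision-oriented extractor (EXT_alpha=0.9);
# alpha = 0.1 yields the recall-oriented one (EXT_alpha=0.1).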
5 Final Evaluation Datasets With the two implicit citation extraction models we trained in the previous section, we construct three datasets for experiments. In each dataset, a data example is a triple: [citing paper’s context, cited paper’s abstract, gold citation text]. The first dataset is an explicit citation text generation dataset 6185 (Explicit dataset). The gold citation text in the training data and test data is single explicit citation sentence. Note that the explicit citation sentences in the training data are automatically extracted with rules and the explicit citation sentences of test data are human annotated. The second dataset is a full citation text generation dataset. The gold full citation texts of test data are human annotated. The gold full citation text of training data is constructed as follows: the gold explicit citation text is extracted with rules and the gold implicit citation text is extracted with EXTα=0.1. This extraction model gets higher recall, so we call this dataset high-recall full citation text generation dataset (HR dataset). The third dataset is also a full citation text generation dataset, and it is constructed in the same way with the second dataset except that the gold implicit citation text of training data is extracted with EXTα=0.9 and we call it high-precision full citation text generation dataset (HP dataset). The cited paper’s abstract in all the three datasets refers to the abstract of the cited paper B. We use it to represent the content of paper B because the whole article is too long to encode. The citing paper’s context in all the three datasets refer to the sentences around the gold citation text in citing paper A. we take three sentences before the gold citation text and three sentences after it as the context. Note that all the context sentences must be in the same section as the gold citation text. Finally, we have three datasets for experiments: • Explicit dataset: This dataset is built for explicit citation text generation. The test set contains 400 examples with human-annotated explicit citation texts and the training set contains 600 examples with human-annotated explicit citation texts and 85,052 examples with explicit citation texts extracted based on rules. The average lengths of explicit citation texts in the training and test sets are 29.64 words and 27.14 words, respectively. • HR dataset: This dataset is built for full citation text generation. The test set contains 400 examples with human-annotated full citation texts and the training set contains 600 examples with human-annotated full citation texts and 85,052 examples with automatically extracted full citation texts (particularly using EXTα=0.1 to extract implicit citation sentences). The average lengths of full citation texts in the training and test sets are 43.50 words and 42.75 words, respectively. • HP dataset: This dataset is similar to HR dataset, and EXTα=0.9 is used to automatically extract implicit citation sentences in the training dataset. The average lengths of full citation texts in the training and test sets are 39.77 words and 42.75 words, respectively. 6 Citation Generation Model Our citation text generation model is a multisource pointer-generator network with cross attention mechanism. Because the citation generation task has two input sequences, we use two encoders to encode them separately and allow the model to copy words from both input sequences. 
Such a multi-source pointer-generator network does not have the ability to model the relationship between the two input sequences, so we add a cross attention mechanism on top of them. The cross attention mechanism calculates the attention distribution of every word over the other sequence of words. These attention distributions are used to help the decoder. We believe that the citing paper's context can tell the model what information in the cited paper's abstract is important, and vice versa. The structure of the whole model is shown in Figure 1.

6.1 Pointer-Generator Network
A typical seq2seq model with attention mechanism has three components: an encoder, a decoder and an attention network. The input text is seen as a sequence of words {w_1, w_2, ..., w_n}. The encoder, which is a single-layer bidirectional LSTM network, receives input words one by one and produces a sequence of encoder hidden states {h_i}. At each decoding step t, the decoder, which is a single-layer unidirectional LSTM, receives the previous word and produces the decoder state s_t. The attention distribution a^t is calculated as in (Bahdanau et al., 2014):

e_i^t = v^T \tanh(W_h h_i + W_s s_t + b_{attn})    (1)
a^t = \mathrm{softmax}(e^t)    (2)

where v, W_h, W_s and b_{attn} are learnable parameters. At each decoding step t, the attention vector a^t is used to calculate the context vector c_t:

c_t = \sum_i a_i^t h_i    (3)

[Figure 1: The structure of our generation model. The citing paper's context encoder and the cited paper's abstract encoder feed a match matrix; softmax over its rows and columns gives the row and column attention matrices, whose relationship vectors, together with the standard attention, are consumed by the decoder.]

The context vector c_t and the decoder state s_t are used to produce the vocabulary distribution P_v:

P_v = \mathrm{softmax}(V_2(V_1[s_t, c_t] + b) + b')    (4)

where V_1, V_2, b and b' are learnable parameters. P_v is a probability distribution over all words in the vocabulary. During training, we use P_v to calculate the cross entropy loss. At each decoding step, this network can generate a word as in a normal seq2seq model or copy a word from the source sequence. The generation probability p_gen for timestep t is:

p_{gen} = \sigma(W_c^T c_t + W_s^T s_t + W_x^T x_t + b_{ptr})    (5)

where c_t is the context vector, s_t is the decoder state, x_t is the decoder input, W_c, W_s, W_x and b_{ptr} are learnable parameters and σ is the sigmoid function. p_gen is used as a soft switch to choose between generating a word from the vocabulary or copying a word from the input sequence. For each text, we define an extended vocabulary which is the union of the vocabulary and all words appearing in the source text. We obtain the following probability distribution over the extended vocabulary:

P(w) = p_{gen} P_v(w) + (1 - p_{gen}) \sum_{i: w_i = w} a_i^t    (6)

Note that if w is not in the vocabulary, P_v(w) is zero. Then we use the probability distribution over the extended vocabulary to calculate the loss.

6.2 Multi-Source Pointer-Generator Network with Cross Attention
Then we introduce our generation model. First, we change the pointer-generator network to a multi-source pointer-generator network. The multi-source pointer-generator network has two encoders and one decoder. The two encoders encode the citing paper's context and the cited paper's abstract separately. The input context of the citing paper is seen as a sequence of words {cw_1, cw_2, ..., cw_n} and the input cited paper's abstract is seen as a sequence of words {aw_1, aw_2, ..., aw_m}. We use the same notation to represent both a word and its embedding vector.
The context is encoded by corresponding encoder to a sequence of encoder hidden states {chi} and the cited paper’s abstract is encoded to a sequence of encoder hidden states {ahj}. At each decoding step t, we calculate attention vectors {act i} , {ast i} and corresponding context vectors c1 t , c2 t separately as described in equations (1), (2) and (3). To make the model copy words from both two encoders, we change equation (5) to: [pgen, pcopy1, pcopy2] = softmax(W T c1c1 t + W T c2c2 t + W T s st + W T x xt + bptr) (7) where pgen is the probability of generating words, pcopy1 is the probability of copying words from citing paper’s context and pcopy2 is the probability of copying words from cited paper’s abstract. And equation (6) needs to be changed to: P(w) = pgenPv(w) + pcopy1Σi:cwi=wact i + pcopy2Σi:awi=wast i (8) Then we add the cross attention mechanism to the multi-source pointer-generator network. By making citing paper’s context and cited paper’s abstract attend to each other, we capture the relationships between them. First, we calculate a match matrix M between the sequence of context’s states {chi} and the sequence of cited paper’s abstract’s states {ahj}. The element of the match matrix 6187 Mi,j is: Mi,j = chi · ahj (9) Then we apply softmax function on the row vectors of the matrix and get an attention matrix Arow. The row vector Arow i of the attention matrix is: Arow i = sotmax([Mi,1, Mi,2, ..., Mi,m]) (10) The vector Arow i represents the attention of word cwi to the sequence of words {aw1, aw2, ..., awm}. We also apply softmax function on the column vectors of the matrix and get another attention matrix Acolumn. The column vector of the attention matrix Acolumn i represents the attention of word awi to the sequence of words {cw1, cw2, ..., cwn}. With the two attention matrices, we calculate two special sequences of vectors. The first sequence of vectors {r1, r2, ..., rn} is calculated as: ri = Σm j=1Arow i,j ∗awj (11) The second sequence {q1, q2, ..., qm} is calculated as: qj = Σn i=1Acolumn i,j ∗cwi (12) The vector ri represent what the word cwi thinks about the sequence of words {aw1, aw2, ..., awm}, while the vector qj represents what the word awj thinks about the sequence of words {cw1, cw2, ..., cwn}. We believe that the two sequences of vectors can model the relationship between the input citing paper’s context and cited paper’s abstract, so we call them relationship vectors. With these two sequences of relationship vectors, we calculate two new context vectors c3 t and c4 t separately at each decoding step t, by replacing the encoder hidden state hi with the relationship vector ri or qj in equations (1) (2) and (3). Finally, we calculate the vocabulary distribution with all four context vectors. We just need to change equation (4) to: Pv = softmax(V2(V1[st, c1 t , c2 t , c3 t , c4 t ] + b) + b′) (13) The final probability distribution over the extended vocabulary is still calculated as equation (8). 7 Experiments 7.1 Experimental Setup The baseline models include: RandomSen: It randomly selects a sentence from the abstract of paper B. 
Models ROUGE-1 ROUGE-2 ROUGE-L RandomSen 15.18 1.37 11.35 MaxSimSen 15.65 1.64 11.45 EXT-ORACLE 22.60 4.21 16.83 COPY-CIT 20.54 3.25 14.79 PTGEN 24.60 6.16 19.19 PTGEN-Cross 26.28 7.50 20.49 Table 5: Comparison results on Explicit dataset ROUGE-1 ROUGE-2 ROUGE-L RandomSen 15.65 1.36 10.98 MaxSimSen 17.70 1.80 12.20 Ext-ORACLE 22.59 3.88 15.97 COPY-CIT 19.32 2.71 13.02 PTGEN 22.83 5.17 18.37 PTGEN-Cross 24.54 5.44 19.21 Table 6: Comparison results on HP dataset ROUGE-1 ROUGE-2 ROUGE-L RandomSen 15.65 1.36 10.98 MaxSimSen 17.70 1.80 12.20 Ext-ORACLE 22.59 3.88 15.97 COPY-CIT 20.08 2.67 13.01 PTGEN 23.26 5.12 18.83 PTGEN-Cross 24.22 6.04 19.38 Table 7: Comparison results on HR dataset MaxSimSen: It selects a sentence from the abstract of paper B, which has the largest similarity with the context of A. EXT-ORACLE: It can be viewed as an upper bound for extractive models. It creates an oracle citation text by selecting the best possible sentence from the abstract of paper B that gives the highest ROUGE with respect to the gold text. COPY-CIT: It randomly copies one citation text from the papers in the training dataset which also cite the paper B. PTGEN: It is a pointer-generator network which allows both copying words via pointing and generating words from a fixed vocabulary. When using this model, we concatenate the citing paper’s context and the cited paper’s abstract as the input sequence. Our proposed model is called PTGEN-Cross. Both our model and the PTGEN has 256dimensional hidden states and 128-dimensional word embeddings. The vocabulary size is set to 50k. At test time the citation texts are produced using beam search with beam size 4. 7.2 Results 7.2.1 Automatic Evaluation We evaluate our models with ROUGE (Lin, 2004), reporting the F1 scores for ROUGE-1, ROUGE-2 6188 Context ...They include entity approaches for local coherence which track the repetition and syntactic realization of entities in adjacent sentences [otherrefer] and content approaches for global coherence which view texts as a sequence of topics, each characterized by a particular distribution of lexical items [otherrefer]. [cit] Early theories [otherrefer] posited that there are three factors which collectively contribute to coherence: intentional structure (purpose of discourse), attentional structure (what items are discussed) and the organization of discourse segments... Abstract We combine lexical, syntactic, and discourse features to produce a highly predictive model of human readers judgments of text readability ... Our experiments indicate that discourse relations are the one class of features that exhibits robustness across these two tasks. Gold Other work has shown that co-occurrence of words [otherrefer] and discourse relations [refer] also predict coherence. PTGEN Recently, approaches [refer] have been suggested to predict the quality of discourse relations. PTGEN-Cross Other work has shown that co-occurrence of sentences [otherrefer ]; [refer] and discourse relations [otherrefer] discourse can be used to predict the coherence of sentences in texts. Table 8: Example output citation texts Gold PTGEN PTGEN-Cross Readability 4.89 3.77 3.79 Content 4.42 2.76 2.77 Coherence 4.41 2.70 2.85 Overall 4.55 2.84 2.91 Table 9: Human evaluation results and ROUGE-L. The test results on three datasets are shown in Tables 5, 6 and 7, respectively. On all three datasets, extractive models perform poorly. Our baseline generation model PTGEN outperforms EXT-ORACLE which can be seen as a ’perfect’ extractive system. 
This is completely different from how these models preform on other summarization tasks like news document summarization. We believe it shows the particularity of this task. It not only requires the model to capture the important content of the cited paper, but also requires the model to capture the attitude of the citing paper to the cited paper. The model not only needs to generate fluent and informative text, but also needs to ensure the contextual coherence. Our proposed model PTGEN-Cross obviously outperforms the baseline model PTGEN. This proves the effectiveness of the cross attention mechanism. We think the cross attention mechanism helps the model capture the relationship between the citing paper’s context and the cited paper’s abstract. The results on explicit citation text generation dataset are all higher than the results on the other two datasets, which means the task of explicit citation text generation is easier than the task of full citation text generation. We think it is because the context of explicit citation sometimes contains some implicit citation sentences and these sentences can be very helpful to the generation of explicit citation text. Another possible reason is that the quality of the training dataset for explicit citation generation is higher than the other two training datasets. Because the test data of the two full citation text generation datasets is the same, we can compare the results of our model training on the two datasets. The model trained on the highrecall dataset performs slightly better. This tells us the coverage ability of the implicit citation extraction model is more important when constructing training dataset for citation generation. 7.2.2 Human Evaluation We randomly sample 50 instances from the highrecall test set and perform human evaluation on them. Three graduate students are employed to rate the citation text produced by each method in four aspects: readability (whether the citation text is fluent), content (whether the citation text is relevant to the cited paper’s abstract), coherence (whether the citation text is coherent with the citing paper’s context) and overall quality. The rating score ranges from 1 to 5, and 1 means very bad and 5 means very good. Note that every text is scored by three judges and we take take the average of three scores. The results are shown in Table 9. As is shown in the table, our model outperforms the baseline model, especially with respect to the coherence and overall aspects. This further demonstrates the efficacy of our proposed model. We show an example of generation in Table 8. Note that all reference signs to the cited paper are masked as ’[refer]’ and all reference signs to other papers are masked as ’[otherrefer]’. The ’[cit]’ in bold in context indicates the position the citation text should be. We can see that the citation text generated by our model is more contextual coherent because it can capture the relationship between context and the cited paper’s abstract better. 6189 8 Conclusion and Future Work In this paper we investigate the challenging task of automatic generation of citation texts in scholarly papers. We annotate a dataset and train an implicit citation extraction model to automatically enlarge the training data. we then propose the multi-source pointer-generation network with cross attention mechanism to deal with this task. Empirical evaluation results on three datasets verify the efficacy of our proposed method. 
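The cross attention at the heart of the proposed model (the match matrix of eq. (9) and the relationship vectors of eqs. (11)-(12)) reduces to a few tensor operations. The PyTorch sketch below is only illustrative; shapes and names are assumptions, and, following the paper's notation, the relationship vectors weight the word embeddings of the other sequence while the match matrix is built from encoder states.

import torch
import torch.nn.functional as F

def relationship_vectors(ctx_states, abs_states, ctx_embeds, abs_embeds):
    """Cross attention of Section 6.2.

    ctx_states:  (n, d)  encoder states of the citing paper's context
    abs_states:  (m, d)  encoder states of the cited paper's abstract
    ctx_embeds:  (n, e)  word embeddings of the context
    abs_embeds:  (m, e)  word embeddings of the abstract
    Returns r (n, e) and q (m, e), one relationship vector per word.
    """
    M = ctx_states @ abs_states.t()      # match matrix, eq. (9)
    A_row = F.softmax(M, dim=1)          # each context word over the abstract, eq. (10)
    A_col = F.softmax(M, dim=0)          # each abstract word over the context
    r = A_row @ abs_embeds               # eq. (11): what cw_i "thinks" of the abstract
    q = A_col.t() @ ctx_embeds           # eq. (12): what aw_j "thinks" of the context
    return r, q

# At each decoding step the decoder attends over r and q exactly as it does
# over the raw encoder states, giving the extra context vectors c3_t and c4_t
# that enter the vocabulary projection of eq. (13).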
In future work, we will consider introducing more information like the citation texts to the cited paper in other papers to help the generation. Acknowledgments This work was supported by National Natural Science Foundation of China (61772036), Tencent AI Lab Rhino-Bird Focused Research Program (No.JR201953) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author. References Awais Athar and Simone Teufel. 2012. Detection of implicit citations for sentiment detection. In Proceedings of the Workshop on Detecting Structure in Scholarly Discourse, pages 18–26, Jeju Island, Korea. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Phyllis B Baxendale. 1958. Machine-made index for technical literaturean experiment. IBM Journal of research and development, 2(4):354–361. Jingqiang Chen and Hai Zhuge. 2014. Summarization of scientific documents by detecting common facts in citations. Future Generation Computer Systems, 32:246–252. Jingqiang Chen and Hai Zhuge. 2019. Automatic generation of related work through summarizing citations. Concurrency and Computation: Practice and Experience, 31(3):e4261. Arman Cohan and Nazli Goharian. 2018. Scientific document summarization via citation contextualization and scientific discourse. International Journal on Digital Libraries, 19(2-3):287–303. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Harold P Edmundson. 1969. New methods in automatic extracting. Journal of the ACM (JACM), 16(2):264–285. Cong Duy Vu Hoang and Min-Yen Kan. 2010. Towards automated related work summarization. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 427–435. Association for Computational Linguistics. Yue Hu and Xiaojun Wan. 2014. Automatic generation of related work sections in scientific papers: an optimization approach. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1624–1633. Kokil Jaidka, Muthu Kumar Chandrasekaran, Sajal Rustagi, and Min-Yen Kan. 2016. Overview of the cl-scisumm 2016 shared task. In Proceedings of the Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL), pages 93–102. Chaker Jebari, Manuel Jes´us Cobo, and Enrique Herrera-Viedma. 2018. A new approach for implicit citation extraction. In International Conference on Intelligent Data Engineering and Automated Learning, pages 121–129. Springer. Dain Kaplan, Ryu Iida, and Takenobu Tokunaga. 2009. Automatic extraction of citation contexts for research paper summarization: A coreference-chain based approach. In Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Libraries (NLPIR4DL), pages 88–95. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 783–792. Association for Computational Linguistics. Hans Peter Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of research and development, 2(2):159–165. Qiaozhu Mei and ChengXiang Zhai. 2008. Generating impact-based summaries for scientific literature. In Proceedings of ACL-08: HLT, pages 816–824. Saif Mohammad, Bonnie Dorr, Melissa Egan, Ahmed Hassan, Pradeep Muthukrishan, Vahed Qazvinian, Dragomir Radev, and David Zajic. 2009. Using citations to generate surveys of scientific paradigms. In Proceedings of human language technologies: The 2009 annual conference of the North American 6190 chapter of the association for computational linguistics, pages 584–592. Association for Computational Linguistics. Vahed Qazvinian and Dragomir R Radev. 2008. Scientific paper summarization using citation summary networks. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 689–696. Association for Computational Linguistics. Vahed Qazvinian and Dragomir R Radev. 2010. Identifying non-explicit citing sentences for citation-based summarization. In Proceedings of the 48th annual meeting of the association for computational linguistics, pages 555–564. Association for Computational Linguistics. Dragomir R Radev, Pradeep Muthukrishnan, Vahed Qazvinian, and Amjad Abu-Jbara. 2013. The acl anthology network corpus. Language Resources and Evaluation, 47(4):919–944. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368. Henry Small. 2011. Interpreting maps of science using citation context sentiments: a preliminary investigation. Scientometrics, 87(2):373–388. Parikshit Sondhi and ChengXiang Zhai. 2014. A constrained hidden markov model approach for nonexplicit citation context extraction. In Proceedings of the 2014 SIAM International Conference on Data Mining, pages 361–369. SIAM. Yuk Wah Wong and Raymond J Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 439–446. Association for Computational Linguistics. Michihiro Yasunaga, Jungo Kasai, Rui Zhang, Alexander R Fabbri, Irene Li, Dan Friedman, and Dragomir R Radev. 2019. Scisummnet: A large annotated corpus and content-impact models for scientific paper summarization with citation networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7386–7393. Ozge Yeloglu, Evangelos Milios, and Nur ZincirHeywood. 2011. Multi-document summarization of scientific corpora. In Proceedings of the 2011 ACM Symposium on Applied Computing, pages 252–258. ACM.
2020
550
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6191–6196 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6191 Composing Elementary Discourse Units in Abstractive Summarization Zhenwen Li and Wenhao Wu and Sujian Li Key Laboratory of Computational Linguistics(MOE) Department of Computer Science, Peking University {lizhenwen, waynewu, lisujian}@pku.edu.cn Abstract In this paper, we argue that elementary discourse unit (EDU) is a more appropriate textual unit of content selection than the sentence unit in abstractive summarization. To well handle the problem of composing EDUs into an informative and fluent summary, we propose a novel summarization method that first designs an EDU selection model to extract and group informative EDUs and then an EDU fusion model to fuse the EDUs in each group into one sentence. We also design the reinforcement learning mechanism to use EDU fusion results to reward the EDU selection action, boosting the final summarization performance. Experiments on CNN/Daily Mail have demonstrated the effectiveness of our model. 1 Introduction Abstractive summarization focuses on generating fluent and concise text from the original input document and has achieved considerable performance improvement with the rapid development of deep learning technology (See et al., 2017; Paulus et al., 2017; Celikyilmaz et al., 2018; Gehrmann et al., 2018). In abstractive summarization, the recently popular and practical paradigm usually generates summary sentences by independently compressing or rewriting each pre-extracted sentence, which is from the source documents (Chen and Bansal, 2018; Lebanoff et al., 2019). However, a single document sentence usually cannot provide enough information that a summary sentence expresses, which is supported by the recent study of Lebanoff et al. (2019). They show that a high percentage of summary sentences include information from more than one document sentences, and composing a summary through only compressing sentences can cause performance degradation. Simultaneously, in contrast to the brevity requirements of a summary, each document sentence usually offers trivial details and expresses a relatively independent meaning, posing difficulty of combining multiple sentences into one summary sentence. So we hope to seek a new summary composition unit which is more information-intensive and elementary than sentence. In this paper, we choose to use Elementary Discourse Unit (EDU) as the summarization unit, which is first proposed from Rhetorical Structure Theory (Mann and Thompson, 1988) and defined as a clause. The finer granularity makes EDU more suitable than sentence to be the basic summary composition unit (Li et al., 2016). At the same time, benefited from the development of EDU segmentation technology, which can achieve a high accuracy of 94% (Wang et al., 2018), it is feasible to automatically obtain EDUs from the text. Next, the problems are: (1) which EDUs should be selected to compose a good summary? Moreover, (2) how to well assemble the selected EDUs into a fluent summary? To solve the problems above, we need to extract the information-intensive EDUs from the source documents and effectively fuse the related EDUs into fluent summary sentences. With such an idea, inspired by Chen and Bansal (2018)’s work, we design an abstractive summarization method which is composed of two parts: EDU selection and EDU fusion. 
EDU selection aims to extract informative EDUs and group them, while EDU fusion takes the grouped EDUs as input to generate a sentence. As the EDU selection process lacks labeled training data, we apply the EDU fusion results as feedback to tune the EDU selection model, which in turn influences the EDU fusion process. Here, the actor-critic reinforcement learning algorithm is employed to train our EDU-based summarization method. To the best of our knowledge, we are the first to propose a practical solution to compose EDUs in summarization. Experiments show that compared to previous models, our EDU-based model achieves a significant improvement on the CNN/Daily Mail dataset.

2 Model
Our model is mainly composed of two modules: EDU Selection and EDU Fusion. EDU Selection aims to extract salient EDUs from the source document and group the closely related EDUs. Here, we adopt a unified end-to-end method to implement both the extraction and the grouping. Next, EDU Fusion takes the EDUs in a group to generate a fluent and informative sentence. To train our method, we adopt reinforcement learning to leverage both modules. Figure 1 shows the whole architecture of our method.

[Figure 1: Overall Architecture of Our Model. The EDU selection module picks EDUs from the document, emitting truncate between groups and stop at the end; the EDU fusion module turns each group into a sentence; and rewards computed against the ground-truth summary sentences drive a policy-gradient update of the selector.]

2.1 EDU Selection
The EDU selection model is mainly based on a sequence-to-sequence pointer network. In the encoding stage, we use a hierarchical encoder to get the contextual representation of each EDU, which consists of a word-level temporal convolutional neural network (Kim, 2014) and an EDU-level Bidirectional Long Short-Term Memory Network (BiLSTM) (Hochreiter and Schmidhuber, 1997). In the decoding stage, we design an LSTM decoder to identify the informative EDUs with their group information. To group the related EDUs, we design a particular label truncate whose representation is a trainable parameter h_truncate. We also add another special label stop with its representation h_stop to determine the end of the selection process. h_truncate and h_stop are first randomly initialized and then learned in the training process. In each decoding step, the decoder computes a selection probability distribution over EDUs, truncate and stop. Assuming that at time step t the indices of the EDUs which have been extracted are included in the set Sel_t, the decoder first uses the Luong attention (Luong et al., 2015) to get the context c_t and then computes a score s_i^t for each EDU or label by:

s_i^t = \begin{cases} v_p^T \tanh(W_p[c_t; h_i]) & i \notin Sel_t \\ -\infty & \text{otherwise} \end{cases}    (1)

where i represents the index of an EDU, truncate or stop, and h_i denotes the corresponding representation. v_p and W_p are the trainable parameters. In order to avoid repeated selection of the same EDUs, we assign the score of -∞ to the EDUs that have been extracted. Note that the label truncate can be generated multiple times since it is not included in Sel_t. Finally, we get the selection probability at time step t by applying softmax to normalize the scores. Once the decoder selects the stop label, it stops the selection process and yields a sequence composed of EDUs, truncate labels and one stop label. Next, the EDUs separated by truncate are grouped for fusion.
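Eq. (1) together with the truncate/stop conventions can be written compactly. The PyTorch code below is a minimal illustration rather than the authors' implementation; parameter shapes and all names are assumptions.

import torch

NEG_INF = float("-inf")

def selection_distribution(c_t, reprs, selected, v_p, W_p):
    """Eq. (1): score every EDU plus the 'truncate' and 'stop' labels and
    softmax the scores.  Already-selected EDU indices are masked to -inf so
    they cannot be picked again; the label positions are never masked, which
    is why 'truncate' can be emitted repeatedly.

    c_t:   (d,)    decoder context from Luong attention
    reprs: (k, d)  EDU representations followed by h_truncate and h_stop
    """
    feats = torch.tanh(torch.cat([c_t.expand_as(reprs), reprs], dim=-1) @ W_p)
    scores = feats @ v_p                               # (k,)
    for i in selected:                                 # mask extracted EDUs
        scores[i] = NEG_INF
    return torch.softmax(scores, dim=-1)

def group_by_truncate(decoded, truncate_id, stop_id):
    """Split the decoded index sequence into EDU groups at 'truncate',
    discarding the trailing 'stop' symbol."""
    groups, current = [], []
    for idx in decoded:
        if idx == stop_id:
            break
        if idx == truncate_id:
            if current:
                groups.append(current)
            current = []
        else:
            current.append(idx)
    if current:
        groups.append(current)
    return groups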
2.2 EDU Fusion
The EDU fusion module uses the standard pointer-generator (See et al., 2017) to generate one sentence for each group of EDUs. This design allows the model to directly copy words from the input EDUs into the generated sentence, which is beneficial for keeping the cross-sentence information in the source documents. At the same time, benefiting from the conditional language model training objective, the coherence of the generated sentences is highly improved, remedying the poor readability of raw EDUs. To leverage EDU selection and fusion for generating a good summary, a reinforcement learning mechanism is designed to use the EDU fusion results to tune the selection process, which in turn affects the fusion performance. We introduce the learning process in detail in Section 3.

3 Learning
We first pre-train the EDU selection and EDU fusion modules separately and then use the pre-trained model as initialization for reinforcement learning (RL).

3.1 Model Pretraining
Because the summarization datasets do not label the salient EDUs, we propose a greedy method to provide the labeled data for pre-training. For each pair of document and summary, we select several groups of EDUs from the document as the oracle EDU labels, with each group corresponding to a summary sentence. For each summary sentence, we construct a group of EDUs iteratively. We start from an empty group and repeatedly select the EDU from the document that maximizes the ROUGE-L recall score between the ground-truth summary sentence and the group after the EDU is added, until no EDU can increase the score. We use ROUGE-L recall so that the EDU selection module can select as much information as possible for EDU fusion. With such a dataset, we pre-train the EDU selection module. To pre-train the EDU fusion module, the input and output are the concatenation of the oracle EDUs and the summary sentences. We pre-train the two modules separately by optimizing maximum likelihood (ML).

3.2 Reinforcement Learning
We use the Advantage Actor-Critic (A2C) algorithm to train our model end-to-end. Following Chen and Bansal (2018)'s work, we fix the parameters of the EDU fusion module during RL training. Here, we regard the EDU selection module as the agent whose decoding stage is formulated as a Markov Decision Process (MDP). In each decoding step, the agent executes one selection action, which is selecting an EDU or a label (truncate or stop) according to the selection probability. Then the agent gets a reward according to the EDU fusion results. As for reward computation, given the group i of selected EDUs, we use the EDU fusion module to generate a sentence s_i and compute its score r_i to measure the overlap between s_i and the sentence gt_i in the ground-truth summary:

r_i = \begin{cases} \text{ROUGE-L}_F(s_i, gt_i) & i \le n \\ 0 & i > n \end{cases}    (2)

where n is the number of sentences in the ground-truth summary. For each selection action that composes the group, we set its reward as r_i / l_i, where l_i is the number of actions (selecting an EDU or truncate) taken to compose the group. Similar to (Chen and Bansal, 2018), we compute the ROUGE-1 F score between the ground-truth summary and the whole set of fused sentences as the reward for the final action that selects the stop label.

Model                 R-1    R-2    R-L
Lead-3               40.34  17.70  36.57
NN (2016)            35.5   14.7   32.2
REFRESH              40.0   18.2   36.6
Pointer Generator    39.53  17.28  36.38
Fan et al. (2017)    39.75  17.29  36.54
Fast-Abs             40.88  17.80  38.54
EDUSum_sel+RL        40.89  18.30  37.79
EDUSum               41.40  18.03  38.79
Table 1: Model Comparison
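Putting the reward scheme of Section 3.2 together: each group's ROUGE-L F score (eq. (2)) is spread uniformly over the actions that built the group, and the stop action receives the ROUGE-1 F score of the whole generated summary. The sketch below is illustrative; rouge_l_f and rouge_1_f stand in for whatever ROUGE implementation is plugged in and are not functions defined in the paper.

def selection_rewards(fused_sents, gold_sents, group_lens,
                      rouge_l_f, rouge_1_f):
    """Per-action rewards for the EDU selection agent.

    fused_sents: sentences produced by the fusion module, one per EDU group
    gold_sents:  ground-truth summary sentences (n of them)
    group_lens:  l_i = number of actions (EDU picks plus the closing truncate)
                 taken to build group i
    Returns one reward per action, followed by the reward for 'stop'.
    """
    n = len(gold_sents)
    rewards = []
    for i, (sent, l_i) in enumerate(zip(fused_sents, group_lens)):
        # Eq. (2): groups beyond the number of gold sentences get zero reward.
        r_i = rouge_l_f(sent, gold_sents[i]) if i < n else 0.0
        rewards.extend([r_i / l_i] * l_i)
    # Final action (emitting 'stop'): ROUGE-1 F of the whole fused summary.
    rewards.append(rouge_1_f(" ".join(fused_sents), " ".join(gold_sents)))
    return rewards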
4 Experiments 4.1 Experiment Setup We conduct experiments on the non-anonymized version of the CNN/Daily Mail dataset (Hermann et al., 2015; See et al., 2017). Using the same processing method as See et al. (2017), the dataset contains 287,226 training pairs, 13,368 validation pairs and 11,490 test pairs. To segment the documents into EDUs, we use Wang et al. (2018)’s model which achieves a 94% F-score in EDU segmentation. To evaluate summarization performance, we use the ROUGE metrics (R-1, R-2 and R-L) (Lin, 2004). For our model, the dimensions of hidden states and word embeddings are set 256 and 128 respectively. The batch size of training is 32, and the discount factor for reward in RL training is set to 0.95. The optimizer is Adam (Kingma and Ba, 2015) with a 0.001 learning rate for pre-training and 0.0001 learning rate for RL training. 1 4.2 Results To evaluate model performance, we compare our model (named EDUSum) with the state-of-the-art extractive and abstractive summarization methods. Three extractive methods are a strong Lead-3 baseline, NN (Cheng and Lapata, 2016) which applies neural networks with attention to extract sentences directly, and REFRESH (Narayan et al., 2018) which uses reinforcement learning to rank sentences. Three abstractive methods for comparison include: Pointer Generator (See et al., 2017), a controllable text generation method (Fan et al., 2017), and Fast-Abs (Chen and Bansal, 2018) which uses 1The source code is available at https://github.com/PKUTANGENT/EDUSum 6194 Model R-1 R-2 R-L EDUSumSameSent 41.17 17.84 38.62 EDUSumgroup−1 40.02 17.21 37.76 EDUSumgroup−2 41.09 17.59 38.54 EDUSumgroup−3 40.20 17.06 37.53 EDUSum 41.40 18.03 38.79 Table 2: Ablation Study on EDU Selection Module reinforcement learning to extract and rewrite sentences. As we can see in Table 1, EDUSum outperforms all the baselines. Compared to Fast-Abs which is similar to EDUSum in model architecture, EDUSum achieves better performance with respect to the three metrics, showing EDU is more informative than sentence and appropriate to be the basic selection unit in summarization. From the table, we can also see that all the summarization methods with RL achieve comparable performance, meaning the RL mechanism can effectively supervise a system to acquire valuable information. We also design a model EDUSumsel+RL which is similar to EDUSum except that it does not include the EDU fusion module and directly concatenates the selected EDUs as a summary. EDUSumsel+RL performs worse with respect to R-1 and R-L when the EDU fusion module is removed, because the direct concatenation of EDUs may bring redundancy into the summary and EDU fusion can make the summary sentence more informative. We also note that EDUSumsel+RL performs better than EDUSum with respect to R-2, perhaps because EDU fusion may generate some fake information and need further improvement which will be our future work. Further, we conduct a thorough analysis of the EDU selection module which is the main component of our method. Compared to previous work, the EDU selection module can automatically determine which EDUs and how many EDUs can be grouped. Such a design is convenient for capturing cross-sentence information effectively. To evaluate whether it is necessary to capture crosssentence information in summarization, we add a constraint to our model: the EDU selection module can only select those EDUs that belong to the same sentence into the same group. We name this model EDUSumSameSent. 
From Table 2, we can see that EDUSumSameSent behaves a little worse than EDUSum. This makes sense because the content of each summary sentence mostly derives from one source sentence and is supplemented by some information from other sentences. We also evaluate the grouping effect of our model by removing the automatic grouping mechanism and instead grouping every K adjacent selected EDUs into a group. We set K to 1, 2, and 3 respectively, where the value of 1 means no grouping at all. Table 2 shows that EDUSumgroup−2 performs the best among all the size settings, but still worse than EDUSum and EDUSumSameSent. This means that a summary sentence is usually composed of two EDUs, but a hard grouping can degrade performance. We also give a summary sentence generated by our method as an example to illustrate the advantage of our model, as shown in Figure 2. We can see that our model can well select and group the EDUs (the underlined EDUs in Sent. 1 and Sent. 2) which have similar meanings, and fuse the grouped EDUs coherently by grabbing the key entity information (i.e., the person and team information in Sent. 1) and combining it into the final summary sentence.

Figure 2: An example of a generated summary sentence that is fused by cross-sentence EDUs.
Sent. 1 (segmented into EDUs): [Juan Mata has collected his player of the month award for March from Manchester United] [and was quick to thank his supporters after receiving the gong.]
Sent. 2 (segmented into EDUs): [Mata scored both goals as United overturned Liverpool with a 2-1 win at Anfield.] [while also producing an impressive display in the 3-0 home victory over Tottenham]
System-generated sentence: Juan Mata scored both goals as Manchester United overturned Liverpool's 2-1 win at Anfield.
Ground truth: Juan Mata scored both times as Manchester United beat Liverpool 2-1.

4.3 Human Evaluation
To evaluate the abstractive ability of our method, we conduct a human evaluation on the two aspects of readability and non-redundancy. Readability measures how easy a text is to read and depends on grammaticality and coherence. Non-redundancy mainly denotes the degree of linguistic brevity of a text in conveying the main idea. To save labor, we only choose two baselines, Fast-Abs and EDUSumsel+RL, which perform well on the ROUGE metrics, for comparison. Compared with scoring, ranking is relatively easy for annotators to perform, and we follow the evaluation method of Wu and Hu (2018). We randomly sample 50 test documents and generate their summaries using our model and the two baselines. Three annotators are asked to rank each set of three summaries with respect to readability and non-redundancy. The best summary is ranked first and the worst third, and ties are allowed. We then compute the average ranks of the three models, as shown in Table 3.

Table 3: Human Evaluation (average rank; lower is better).
Model           Read.   Non-redund.
Fast-Abs        1.86    2.10
EDUSumsel+RL    2.22    1.94
EDUSum          1.92    1.96

We see that EDUSum balances readability and non-redundancy well compared with the two baselines. Both EDUSum and EDUSumsel+RL achieve a significant improvement in non-redundancy, because the fine-grained EDUs can contain more informative cross-sentence information and make the summaries briefer. We can also see that EDUSumsel+RL suffers from poor readability because it simply concatenates EDUs into a sentence, which is the main problem that EDU-based models face.
As for EDUSum, benefited from EDU fusion, this model can achieve nearly the same readability as the sentence based model Fast-Abs. 5 Conclusions In this paper, we choose EDU as the basic summary unit and propose a novel EDU based summarization model EDUSum. In our model, the module of EDU selection is designed to extract and group salient EDUs and the module of EDU fusion to convert groups of EDUs into summary sentences. We also apply reinforcement learning to leverage EDU selection and EDU fusion for improving summarization performance. With such a design, EDUSum can fuse cross-sentence information and remedy the poor readability problem brought by EDUs. Compared to previous work, this work has provided a feasible and effective method which makes full use of EDUs in summarization. Acknowledgements We thank the anonymous reviewers for their helpful comments on this paper. This work was partially supported by National Key Research and Development Project (2019YFB1704002) and National Natural Science Foundation of China (61876009 and 61572049). The corresponding author of this paper is Sujian Li. References Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. arXiv preprint arXiv:1803.10357. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675–686, Melbourne, Australia. Association for Computational Linguistics. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494, Berlin, Germany. Association for Computational Linguistics. Angela Fan, David Grangier, and Michael Auli. 2017. Controllable abstractive summarization. arXiv preprint arXiv:1711.05217. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109, Brussels, Belgium. Association for Computational Linguistics. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693–1701. Curran Associates, Inc. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735– 1780. 6196 Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019. Scoring sentence singletons and pairs for abstractive summarization. arXiv preprint arXiv:1906.00077. Junyi Jessy Li, Kapil Thadani, and Amanda Stent. 2016. The role of discourse units in near-extractive summarization. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 137–147, Los Angeles. 
Association for Computational Linguistics. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proc. ACL workshop on Text Summarization Branches Out, page 10. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421, Lisbon, Portugal. Association for Computational Linguistics. William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text - Interdisciplinary Journal for the Study of Discourse, 8(3), 8(3):243– 281. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1747–1759, New Orleans, Louisiana. Association for Computational Linguistics. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Yizhong Wang, Sujian Li, and Jingfeng Yang. 2018. Toward fast and accurate neural discourse segmentation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 962–967, Brussels, Belgium. Association for Computational Linguistics. Yuxiang Wu and Baotian Hu. 2018. Learning to extract coherent summary via deep reinforcement learning. In Thirty-Second AAAI Conference on Artificial Intelligence.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6197–6208 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6197 Extractive Summarization as Text Matching Ming Zhong∗, Pengfei Liu∗, Yiran Chen, Danqing Wang, Xipeng Qiu†, Xuanjing Huang Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University 825 Zhangheng Road, Shanghai, China {mzhong18,pfliu14,yrchen19,dqwang18,xpqiu,xjhuang}@fudan.edu.cn Abstract This paper creates a paradigm shift with regard to the way we build neural extractive summarization systems. Instead of following the commonly used framework of extracting sentences individually and modeling the relationship between sentences, we formulate the extractive summarization task as a semantic text matching problem, in which a source document and candidate summaries will be (extracted from the original text) matched in a semantic space. Notably, this paradigm shift to semantic matching framework is well-grounded in our comprehensive analysis of the inherent gap between sentence-level and summary-level extractors based on the property of the dataset. Besides, even instantiating the framework with a simple form of a matching model, we have driven the state-of-the-art extractive result on CNN/DailyMail to a new level (44.41 in ROUGE-1). Experiments on the other five datasets also show the effectiveness of the matching framework. We believe the power of this matching-based summarization framework has not been fully exploited. To encourage more instantiations in the future, we have released our codes, processed dataset, as well as generated summaries in https://github. com/maszhongming/MatchSum. 1 Introduction The task of automatic text summarization aims to compress a textual document to a shorter highlight while keeping salient information on the original text. In this paper, we focus on extractive summarization since it usually generates semantically and grammatically correct sentences (Dong et al., 2018; Nallapati et al., 2017) and computes faster. Currently, most of the neural extractive summarization systems score and extract sentences (or smaller semantic unit (Xu et al., 2019)) one by ∗These two authors contributed equally. †Corresponding author. Document Candidate Summary Gold Summary extract Semantic Space BERT BERT BERT Figure 1: MATCHSUM framework. We match the contextual representations of the document with gold summary and candidate summaries (extracted from the document). Intuitively, better candidate summaries should be semantically closer to the document, while the gold summary should be the closest. one from the original text, model the relationship between the sentences, and then select several sentences to form a summary. Cheng and Lapata (2016); Nallapati et al. (2017) formulate the extractive summarization task as a sequence labeling problem and solve it with an encoder-decoder framework. These models make independent binary decisions for each sentence, resulting in high redundancy. A natural way to address the above problem is to introduce an auto-regressive decoder (Chen and Bansal, 2018; Jadhav and Rajan, 2018; Zhou et al., 2018), allowing the scoring operations of different sentences to influence on each other. Trigram Blocking (Paulus et al., 2017; Liu and Lapata, 2019), as a more popular method recently, has the same motivation. 
At the stage of selecting sentences to form a summary, it will skip the sentence that has trigram overlapping with the previously selected sentences. Surprisingly, this simple method of removing duplication brings a remarkable performance improvement on CNN/DailyMail. The above systems of modeling the relationship between sentences are essentially sentence-level extractors, rather than considering the semantics 6198 of the entire summary. This makes them more inclined to select highly generalized sentences while ignoring the coupling of multiple sentences. Narayan et al. (2018b); Bae et al. (2019) utilize reinforcement learning (RL) to achieve summarylevel scoring, but still limited to the architecture of sentence-level summarizers. To better understand the advantages and limitations of sentence-level and summary-level approaches, we conduct an analysis on six benchmark datasets (in Section 3) to explore the characteristics of these two methods. We find that there is indeed an inherent gap between the two approaches across these datasets, which motivates us to propose the following summary-level method. In this paper, we propose a novel summary-level framework (MATCHSUM, Figure 1) and conceptualize extractive summarization as a semantic text matching problem. The principle idea is that a good summary should be more semantically similar as a whole to the source document than the unqualified summaries. Semantic text matching is an important research problem to estimate semantic similarity between a source and a target text fragment, which has been applied in many fields, such as information retrieval (Mitra et al., 2017), question answering (Yih et al., 2013; Severyn and Moschitti, 2015), natural language inference (Wang and Jiang, 2016; Wang et al., 2017) and so on. One of the most conventional approaches to semantic text matching is to learn a vector representation for each text fragment, and then apply typical similarity metrics to compute the matching scores. Specific to extractive summarization, we propose a Siamese-BERT architecture to compute the similarity between the source document and the candidate summary. Siamese BERT leverages the pre-trained BERT (Devlin et al., 2019) in a Siamese network structure (Bromley et al., 1994; Hoffer and Ailon, 2015; Reimers and Gurevych, 2019) to derive semantically meaningful text embeddings that can be compared using cosine-similarity. A good summary has the highest similarity among a set of candidate summaries. We evaluate the proposed matching framework and perform significance testing on a range of benchmark datasets. Our model outperforms strong baselines significantly in all cases and improve the state-of-the-art extractive result on CNN/DailyMail. Besides, we design experiments to observe the gains brought by our framework. We summarize our contributions as follows: 1) Instead of scoring and extracting sentences one by one to form a summary, we formulate extractive summarization as a semantic text matching problem and propose a novel summary-level framework. Our approach bypasses the difficulty of summary-level optimization by contrastive learning, that is, a good summary should be more semantically similar to the source document than the unqualified summaries. 2) We conduct an analysis to investigate whether extractive models must do summary-level extraction based on the property of dataset, and attempt to quantify the inherent gap between sentence-level and summary-level methods. 
3) Our proposed framework has achieved superior performance compared with strong baselines on six benchmark datasets. Notably, we obtain a state-of-the-art extractive result on CNN/DailyMail (44.41 in ROUGE-1) by only using the base version of BERT. Moreover, we seek to observe where the performance gain of our model comes from. 2 Related Work 2.1 Extractive Summarization Recent research work on extractive summarization spans a large range of approaches. These work usually instantiate their encoder-decoder framework by choosing RNN (Zhou et al., 2018), Transformer (Zhong et al., 2019b; Wang et al., 2019) or GNN (Wang et al., 2020) as encoder, non-auto-regressive (Narayan et al., 2018b; Arumae and Liu, 2018) or auto-regressive decoders (Jadhav and Rajan, 2018; Liu and Lapata, 2019). Despite the effectiveness, these models are essentially sentence-level extractors with individual scoring process favor the highest scoring sentence, which probably is not the optimal one to form summary1. The application of RL provides a means of summary-level scoring and brings improvement (Narayan et al., 2018b; Bae et al., 2019). However, these efforts are still limited to auto-regressive or non-auto-regressive architectures. Besides, in the non-neural approaches, the Integer Linear Programming (ILP) method can also be used for summarylevel scoring (Wan et al., 2015). In addition, there is some work to solve extractive summarization from a semantic perspective before this paper, such as concept coverage (Gillick 1We will quantify this phenomenon in Section 3. 6199 and Favre, 2009), reconstruction (Miao and Blunsom, 2016) and maximize semantic volume (Yogatama et al., 2015). 2.2 Two-stage Summarization Recent studies (Alyguliyev, 2009; Galanis and Androutsopoulos, 2010; Zhang et al., 2019a) have attempted to build two-stage document summarization systems. Specific to extractive summarization, the first stage is usually to extract some fragments of the original text, and the second stage is to select or modify on the basis of these fragments. Chen and Bansal (2018) and Bae et al. (2019) follow a hybrid extract-then-rewrite architecture, with policy-based RL to bridge the two networks together. Lebanoff et al. (2019); Xu and Durrett (2019); Mendes et al. (2019) focus on the extractthen-compress learning paradigm, which will first train an extractor for content selection. Our model can be viewed as an extract-then-match framework, which also employs a sentence extractor to prune unnecessary information. 3 Sentence-Level or Summary-Level? A Dataset-dependent Analysis Although previous work has pointed out the weakness of sentence-level extractors, there is no systematic analysis towards the following questions: 1) For extractive summarization, is the summarylevel extractor better than the sentence-level extractor? 2) Given a dataset, which extractor should we choose based on the characteristics of the data, and what is the inherent gap between these two extractors? In this section, we investigate the gap between sentence-level and summary-level methods on six benchmark datasets, which can instruct us to search for an effective learning framework. It is worth noting that the sentence-level extractor we use here doesn’t include a redundancy removal process so that we can estimate the effect of the summarylevel extractor on redundancy elimination. Notably, the analysis method to estimate the theoretical effectiveness presented in this section is generalized and can be applicable to any summary-level approach. 
3.1 Definition We refer to D = {s1, · · · , sn} as a single document consisting of n sentences, and C = {s1, · · · , sk, |si ∈D} as a candidate summary including k (k ≤n) sentences extracted from a document. Given a document D with its gold summary C∗, we measure a candidate summary C by calculating the ROUGE (Lin and Hovy, 2003) value between C and C∗in two levels: 1) Sentence-Level Score: gsen(C) = 1 |C| X s∈C R(s, C∗), (1) where s is the sentence in C and |C| represents the number of sentences. R(·) denotes the average ROUGE score2. Thus, gsen(C) indicates the average overlaps between each sentence in C and the gold summary C∗. 2) Summary-Level Score: gsum(C) = R(C, C∗), (2) where gsum(C) considers sentences in C as a whole and then calculates the ROUGE score with the gold summary C∗. Pearl-Summary We define the pearl-summary to be the summary that has a lower sentence-level score but a higher summary-level score. Definition 1 A candidate summary C is defined as a pearl-summary if there exists another candidate summary C′ that satisfies the inequality: gsen(C′) > gsen(C) while gsum(C′) < gsum(C). Clearly, if a candidate summary is a pearl-summary, it is challenging for sentence-level summarizers to extract it. Best-Summary The best-summary refers to a summary has highest summary-level score among all the candidate summaries. Definition 2 A summary ˆC is defined as the bestsummary when it satisfies: ˆC = argmax C∈C gsum(C), where C denotes all the candidate summaries of the document. 3.2 Ranking of Best-Summary For each document, we sort all candidate summaries3 in descending order based on the sentencelevel score, and then define z as the rank index of the best-summary ˆC. 2Here we use mean F1 of ROUGE-1, ROUGE-2 and ROUGE-L. 3We use an approximate method here: take #Ext (see Table 1) of ten highest-scoring sentences to form candidate summaries. 6200 Datasets Source Type # Pairs # Tokens # Ext Train Valid Test Doc. Sum. Reddit Social Media SDS 41,675 645 645 482.2 28.0 2 XSum News SDS 203,028 11,273 11,332 430.2 23.3 2 CNN/DM News SDS 287,084 13,367 11,489 766.1 58.2 3 WikiHow Knowledge Base SDS 168,126 6,000 6,000 580.8 62.6 4 PubMed Scientific Paper SDS 83,233 4,946 5,025 444.0 209.5 6 Multi-News News MDS 44,972 5,622 5,622 487.3 262.0 9 Table 1: Datasets overview. SDS represents single-document summarization and MDS represents multi-document summarization. The data in Doc. and Sum. indicates the average length of document and summary in the test set respectively. # Ext denotes the number of sentences should extract in different datasets. (a) Reddit (b) XSum (c) CNN/DM (d) WikiHow (e) PubMed (f) Multi-News Figure 2: Distribution of z(%) on six datasets. Because the number of candidate summaries for each document is different (short text may have relatively few candidates), we use z / number of candidate summaries as the X-axis. The Y-axis represents the proportion of the best-summaries with this rank in the test set. Intuitively, 1) if z = 1 ( ˆC comes first), it means that the best-summary is composed of sentences with the highest score; 2) If z > 1, then the bestsummary is a pearl-summary. And as z increases ( ˆC gets lower rankings), we could find more candidate summaries whose sentence-level score is higher than best-summary, which leads to the learning difficulty for sentence-level extractors. 
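To make these definitions concrete, the following small sketch (ours, not the released analysis code) computes the sentence-level score of Eq. (1), the summary-level score of Eq. (2), the rank z of the best-summary, and the pearl-summary test of Definition 1. The rouge argument is a placeholder callable returning ROUGE-1/2/L F1 scores; candidate construction is assumed to follow the approximation in footnote 3, and ties are ignored for simplicity.

```python
def mean_rouge_f(hyp, ref, rouge):
    """R(.) from footnote 2: mean F1 of ROUGE-1, ROUGE-2 and ROUGE-L.
    `rouge` is assumed to return a dict like {'rouge-1': f1, 'rouge-2': f1, 'rouge-l': f1}."""
    scores = rouge(hyp, ref)
    return (scores["rouge-1"] + scores["rouge-2"] + scores["rouge-l"]) / 3.0

def score_candidates(candidates, gold, rouge):
    """candidates: list of candidate summaries, each a list of sentences; gold: gold summary string.
    Returns the per-candidate (g_sen, g_sum), the rank z of the best-summary, and a pearl flag."""
    g_sen = [sum(mean_rouge_f(s, gold, rouge) for s in c) / len(c) for c in candidates]  # Eq. (1)
    g_sum = [mean_rouge_f(" ".join(c), gold, rouge) for c in candidates]                 # Eq. (2)

    best = max(range(len(candidates)), key=lambda i: g_sum[i])          # best-summary (Definition 2)
    order = sorted(range(len(candidates)), key=lambda i: g_sen[i], reverse=True)
    z = order.index(best) + 1                                            # rank by sentence-level score

    # the best-summary is a pearl-summary (Definition 1) iff some candidate beats it on g_sen
    is_pearl = any(g_sen[i] > g_sen[best] for i in range(len(candidates)) if i != best)
    return g_sen, g_sum, z, is_pearl
```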
Since the appearance of the pearl-summary will bring challenges to sentence-level extractors, we attempt to investigate the proportion of pearlsummary in different datasets on six benchmark datasets. A detailed description of these datasets is displayed in Table 1. As demonstrated in Figure 2, we can observe that for all datasets, most of the best-summaries are not made up of the highest-scoring sentences. Specifically, for CNN/DM, only 18.9% of best-summaries are not pearl-summary, indicating sentence-level extractors will easily fall into a local optimization, missing better candidate summaries. Different from CNN/DM, PubMed is most suitable for sentence-level summarizers, because most of best-summary sets are not pearl-summary. Additionally, it is challenging to achieve good performance on WikiHow and Multi-News without a summary-level learning process, as these two datasets are most evenly distributed, that is, the appearance of pearl-summary makes the selection of the best-summary more complicated. In conclusion, the proportion of the pearlsummaries in all the best-summaries is a property to characterize a dataset, which will affect our choices of summarization extractors. 3.3 Inherent Gap between Sentence-Level and Summary-Level Extractors Above analysis has explicated that the summarylevel method is better than the sentence-level method because it can pick out pearl-summaries, but how much improvement can it bring given a specific dataset? Based on the definition of Eq. (1) and (2), we can characterize the upper bound of the sentencelevel and summary-level summarization systems for a document D as: 6201 Reddit XSum CNN/DM WikiHow PubMed Multi-News 0 1 2 3 4 5 ∆(D) Figure 3: ∆(D) for different datasets. αsen(D) = max C∈CD gsen(C), (3) αsum(D) = max C∈CD gsum(C), (4) where CD is the set of candidate summaries extracted from D. Then, we quantify the potential gain for a document D by calculating the difference between αsen(D) and αsum(D): ∆(D) = αsum(D) −αsen(D). (5) Finally, a dataset-level potential gain can be obtained as: ∆(D) = 1 |D| X D∈D ∆(D), (6) where D represents a specific dataset and |D| is the number of documents in this dataset. We can see from Figure 3, the performance gain of the summary-level method varies with the dataset and has an improvement at a maximum 4.7 on CNN/DM. From Figure 3 and Table 1, we can find the performance gain is related to the length of reference summary for different datasets. In the case of short summaries (Reddit and XSum), the perfect identification of pearl-summaries does not lead to much improvement. Similarly, multiple sentences in a long summary (PubMed and Multi-News) already have a large degree of semantic overlap, making the improvement of the summary-level method relatively small. But for a medium-length summary (CNN/DM and WikiHow, about 60 words), the summary-level learning process is rewarding. We will discuss this performance gain with specific models in Section 5.4. 4 Summarization as Matching The above quantitative analysis suggests that for most of the datasets, sentence-level extractors are inherently unaware of pearl-summary, so obtaining the best-summary is difficult. To better utilize the above characteristics of the data, we propose a summary-level framework which could score and extract a summary directly. 
Specifically, we formulate the extractive summarization task as a semantic text matching problem, in which a source document and candidate summaries will be (extracted from the original text) matched in a semantic space. The following section will detail how we instantiate our proposed matching summarization framework by using a simple siamese-based architecture. 4.1 Siamese-BERT Inspired by siamese network structure (Bromley et al., 1994), we construct a Siamese-BERT architecture to match the document D and the candidate summary C. Our Siamese-BERT consists of two BERTs with tied-weights and a cosine-similarity layer during the inference phase. Unlike the modified BERT used in (Liu, 2019; Bae et al., 2019), we directly use the original BERT to derive the semantically meaningful embeddings from document D and candidate summary C since we need not obtain the sentence-level representation. Thus, we use the vector of the ‘[CLS]’ token from the top BERT layer as the representation of a document or summary. Let rD and rC denote the embeddings of the document D and candidate summary C. Their similarity score is measured by f(D, C) = cosine(rD, rC). In order to fine-tune Siamese-BERT, we use a margin-based triplet loss to update the weights. Intuitively, the gold summary C∗should be semantically closest to the source document, which is the first principle our loss should follow: L1 = max(0, f(D, C) −f(D, C∗) + γ1), (7) where C is the candidate summary in D and γ1 is a margin value. Besides, we also design a pairwise margin loss for all the candidate summaries. We sort all candidate summaries in descending order of ROUGE scores with the gold summary. Naturally, the candidate pair with a larger ranking gap should have a larger margin, which is the second principle to design our loss function: L2 = max(0, f(D, Cj) −f(D, Ci) + (j −i) ∗γ2) (i < j), (8) 6202 where Ci represents the candidate summary ranked i and γ2 is a hyperparameter used to distinguish between good and bad candidate summaries. Finally, our margin-based triplet loss can be written as: L = L1 + L2. (9) The basic idea is to let the gold summary have the highest matching score, and at the same time, a better candidate summary should obtain a higher score compared with the unqualified candidate summary. Figure 1 illustrate this idea. In the inference phase, we formulate extractive summarization as a task to search for the best summary among all the candidates C extracted from the document D. ˆC = arg max C∈C f(D, C). (10) 4.2 Candidates Pruning Curse of Combination The matching idea is more intuitive while it suffers from combinatorial explosion problems. For example, how could we determine the size of the candidate summary set or should we score all possible candidates? To alleviate these difficulties, we propose a simple candidate pruning strategy. Concretely, we introduce a content selection module to pre-select salient sentences. The module learns to assign each sentence a salience score and prunes sentences irrelevant with the current document, resulting in a pruned document D ′ = {s ′ 1, · · · , s ′ ext|s ′ i ∈D}. Similar to much previous work on two-stage summarization, our content selection module is a parameterized neural network. In this paper, we use BERTSUM (Liu and Lapata, 2019) without trigram blocking (we call it BERTEXT) to score each sentence. 
Then, we use a simple rule to obtain the candidates: generating all combinations of sel sentences subject to the pruned document, and reorganize the order of sentences according to the original position in the document to form candidate summaries. Therefore, we have a total of ext sel  candidate sets. 5 Experiment 5.1 Datasets In order to verify the effectiveness of our framework and obtain more convicing explanations, we perform experiments on six divergent mainstream datasets as follows. Reddit XSum CNN/DM Wiki PubMed M-News Ext 5 5 5 5 7 10 Sel 1, 2 1, 2 2, 3 3, 4, 5 6 9 Size 15 15 20 16 7 9 Table 2: Details about the candidate summary for different datasets. Ext denotes the number of sentences after we prune the original document, Sel denotes the number of sentences to form a candidate summary and Size is the number of final candidate summaries. CNN/DailyMail (Hermann et al., 2015) is a commonly used news summarization dataset modified by Nallapati et al. (2016). PubMed (Cohan et al., 2018) is collected from scientific papers. We modify this dataset by using the introduction section as the document and the abstract section as the corresponding summary. WikiHow (Koupaee and Wang, 2018) is a diverse dataset extracted from an online knowledge base. XSum (Narayan et al., 2018a) is a one-sentence summary dataset to answer the question “What is the article about?”. Multi-News (Fabbri et al., 2019) is a multi-document news summarization dataset, we concatenate the source documents as a single input. Reddit (Kim et al., 2019) is a highly abstractive dataset collected from social media platform. We use the TIFU-long version of Reddit. 5.2 Implementation Details We use the base version of BERT to implement our models in all experiments. Adam optimizer (Kingma and Ba, 2014) with warming-up is used and our learning rate schedule follows Vaswani et al. (2017) as: lr = 2e−3 · min(step−0.5, step · wm−1.5), (11) where each step is a batch size of 32 and wm denotes warmup steps of 10,000. We choose γ1 = 0 and γ2 = 0.01. When γ1<0.05 and 0.005<γ2<0.05 they have little effect on performance, otherwise they will cause performance degradation. We use the validation set to save three best checkpoints during training, and record the performance of the best checkpoints on the test set. Importantly, all the experimental results listed in this paper are the average of three runs. To obtain a Siamese-BERT model on CNN/DM, we use 8 TeslaV100-16G GPUs for about 30 hours of training. For datasets, we remove samples with empty document or summary and truncate the document 6203 Model R-1 R-2 R-L LEAD 40.43 17.62 36.67 ORACLE 52.59 31.23 48.87 MATCH-ORACLE 51.08 26.94 47.22 BANDITSUM (Dong et al., 2018) 41.50 18.70 37.60 NEUSUM (Zhou et al., 2018) 41.59 19.01 37.98 JECS (Xu and Durrett, 2019) 41.70 18.50 37.90 HIBERT (Zhang et al., 2019b) 42.37 19.95 38.83 PNBERT (Zhong et al., 2019a) 42.39 19.51 38.69 PNBERT + RL 42.69 19.60 38.85 BERTEXT† (Bae et al., 2019) 42.29 19.38 38.63 BERTEXT† + RL 42.76 19.87 39.11 BERTEXT (Liu, 2019) 42.57 19.96 39.04 BERTEXT + Tri-Blocking 43.23 20.22 39.60 BERTSUM∗(Liu and Lapata, 2019) 43.85 20.34 39.90 BERTEXT (Ours) 42.73 20.13 39.20 BERTEXT + Tri-Blocking (Ours) 43.18 20.16 39.56 MATCHSUM (BERT-base) 44.22 20.62 40.38 MATCHSUM (RoBERTa-base) 44.41 20.86 40.55 Table 3: Results on CNN/DM test set. The model with ∗indicates that the large version of BERT is used. BERTEXT† add an additional Pointer Network compared to other BERTEXT in this table. 
to 512 tokens, therefore ORACLE in this paper is calculated on the truncated datasets. Details of candidate summary for the different datasets can be found in Table 2. 5.3 Experimental Results Results on CNN/DM As shown in Table 3, we list strong baselines with different learning approaches. The first section contains LEAD, ORACLE and MATCH-ORACLE4. Because we prune documents before matching, MATCH-ORACLE is relatively low. We can see from the second section, although RL can score the entire summary, it does not lead to much performance improvement. This is probably because it still relies on the sentence-level summarizers such as Pointer network or sequence labeling models, which select sentences one by one, rather than distinguishing the semantics of different summaries as a whole. Trigram Blocking is a simple yet effective heuristic on CNN/DM, even better than all redundancy removal methods based on neural models. 4LEAD and ORACLE are common baselines in the summarization task. The former means extracting the first several sentences of a document as a summary, the latter is the groundtruth used in extractive models training. MATCHORACLE is the groundtruth used to train MATCHSUM. Model R-1 R-2 R-L Reddit BERTEXT (Num = 1) 21.99 5.21 16.99 BERTEXT (Num = 2) 23.86 5.85 19.11 MATCHSUM (Sel = 1) 22.87 5.15 17.40 MATCHSUM (Sel = 2) 24.90 5.91 20.03 MATCHSUM (Sel = 1, 2) 25.09 6.17 20.13 XSum BERTEXT (Num = 1) 22.53 4.36 16.23 BERTEXT (Num = 2) 22.86 4.48 17.16 MATCHSUM (Sel = 1) 23.35 4.46 16.71 MATCHSUM (Sel = 2) 24.48 4.58 18.31 MATCHSUM (Sel = 1, 2) 24.86 4.66 18.41 Table 4: Results on test sets of Reddit and XSum. Num indicates how many sentences BERTEXT extracts as a summary and Sel indicates the number of sentences we choose to form a candidate summary. Compared with these models, our proposed MATCHSUM has outperformed all competitors by a large margin. For example, it beats BERTEXT by 1.51 ROUGE-1 score when using BERT-base as the encoder. Additionally, even compared with the baseline with BERT-large pre-trained encoder, our model MATCHSUM (BERT-base) still perform better. Furthermore, when we change the encoder to RoBERTa-base (Liu et al., 2019), the performance can be further improved. We think the improvement here is because RoBERTa introduced 63 million English news articles during pretraining. The superior performance on this dataset demonstrates the effectiveness of our proposed matching framework. Results on Datasets with Short Summaries Reddit and XSum have been heavily evaluated by abstractive summarizer due to their short summaries. Here, we evaluate our model on these two datasets to investigate whether MATCHSUM could achieve improvement when dealing with summaries containing fewer sentences compared with other typical extractive models. When taking just one sentence to match the original document, MATCHSUM degenerates into a re-ranking of sentences. Table 4 illustrates that this degradation can still bring a small improvement (compared to BERTEXT (Num = 1), 0.88 ∆R-1 on Reddit, 0.82 ∆R-1 on XSum). 
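The Sel settings above come directly from the candidate construction of Section 4.2. As a rough illustration of that construction and of the matching rule in Eq. (10), the sketch below is ours (not the released MATCHSUM code); it assumes the document and candidate embeddings have already been obtained from the '[CLS]' vectors of the Siamese-BERT encoders.

```python
from itertools import combinations

import torch
import torch.nn.functional as F

def generate_candidates(sent_ids, sel_sizes):
    """Enumerate candidate summaries from the pruned document (Section 4.2).
    sent_ids:  indices of the Ext sentences kept by the content selector, in document order.
    sel_sizes: e.g. (1, 2) for Reddit/XSum or (2, 3) for CNN/DM, following Table 2."""
    candidates = []
    for k in sel_sizes:
        for combo in combinations(sent_ids, k):
            candidates.append(sorted(combo))   # keep sentences in their original document order
    return candidates

def match_best_candidate(doc_emb, cand_embs):
    """Eq. (10): choose the candidate whose embedding is closest (cosine) to the document's.
    doc_emb: [d]; cand_embs: [num_candidates, d]."""
    scores = F.cosine_similarity(cand_embs, doc_emb.expand_as(cand_embs), dim=-1)
    return int(scores.argmax()), scores
```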
However, when the number of sentences increases to two and summary-level semantics need to be taken into account, MATCHSUM can obtain a more re6204 Model WikiHow PubMed Multi-News R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L LEAD 24.97 5.83 23.24 37.58 12.22 33.44 43.08 14.27 38.97 ORACLE 35.59 12.98 32.68 45.12 20.33 40.19 49.06 21.54 44.27 MATCH-ORACLE 35.22 10.55 32.87 42.21 15.42 37.67 47.45 17.41 43.14 BERTEXT 30.31 8.71 28.24 41.05 14.88 36.57 45.80 16.42 41.53 + 3gram-Blocking 30.37 8.45 28.28 38.81 13.62 34.52 44.94 15.47 40.63 + 4gram-Blocking 30.40 8.67 28.32 40.29 14.37 35.88 45.86 16.23 41.57 MATCHSUM (BERT-base) 31.85 8.98 29.58 41.21 14.91 36.75 46.20 16.51 41.89 Table 5: Results on test sets of WikiHow, PubMed and Multi-News. MATCHSUM beats the state-of-the-art BERT model with Ngram Blocking on all different domain datasets. markable improvement (compared to BERTEXT (Num = 2), 1.04 ∆R-1 on Reddit, 1.62 ∆R-1 on XSum). In addition, our model maps candidate summary as a whole into semantic space, so it can flexibly choose any number of sentences, while most other methods can only extract a fixed number of sentences. From Table 4, we can see this advantage leads to further performance improvement. Results on Datasets with Long Summaries When the summary is relatively long, summarylevel matching becomes more complicated and is harder to learn. We aim to compare the difference between Trigram Blocking and our model when dealing with long summaries. Table 5 presents that although Trigram Blocking works well on CNN/DM, it does not always maintain a stable improvement. Ngram Blocking has little effect on WikiHow and Multi-News, and it causes a large performance drop on PubMed. We think the reason is that Ngram Blocking cannot really understand the semantics of sentences or summaries, just restricts the presence of entities with many words to only once, which is obviously not suitable for the scientific domain where entities may often appear multiple times. On the contrary, our proposed method does not have strong constraints but aligns the document with the summary from semantic space. Experiment results display that our model is robust on all domains, especially on WikiHow, MATCHSUM beats the state-of-the-art model by 1.54 R-1 score. 5.4 Analysis Our analysis here is driven by two questions: 1) Whether the benefits of MATCHSUM are consistent with the property of the dataset analyzed in Section 3? 2) Why have our model achieved different performance gains on diverse datasets? Dataset Splitting Testing Typically, we choose three datasets (XSum, CNN/DM and WikiHow) with the largest performance gain for this experiment. We split each test set into roughly equal numbers of five parts according to z described in Section 3.2, and then experiment with each subset. Figure 4 shows that the performance gap between MATCHSUM and BERTEXT is always the smallest when the best-summary is not a pearlsummary (z = 1). The phenomenon is in line with our understanding, in these samples, the ability of the summary-level extractor to discover pearlsummaries does not bring advantages. As z increases, the performance gap generally tends to increase. Specifically, the benefit of MATCHSUM on CNN/DM is highly consistent with the appearance of pearl-summary. It can only bring an improvement of 0.49 in the subset with the smallest z, but it rises sharply to 1.57 when z reaches its maximum value. 
WikiHow is similar to CNN/DM, when best-summary consists entirely of highest-scoring sentences, the performance gap is obviously smaller than in other samples. XSum is slightly different, although the trend remains the same, our model does not perform well in the samples with the largest z, which needs further improvement and exploration. From the above comparison, we can see that the performance improvement of MATCHSUM is concentrated in the samples with more pearlsummaries, which illustrates our semantic-based summary-level model can capture sentences that are not particularly good when viewed individually, thereby forming a better summary. Comparison Across Datasets Intuitively, improvements brought by MATCHSUM framework 6205 1 2 3 4 5 1.05 1.1 1.15 1.2 1.25 1.3 z: Small =⇒Large ∆R (a) XSum 1 2 3 4 5 0.4 0.6 0.8 1 1.2 1.4 1.6 z: Small =⇒Large ∆R (b) CNN/DM 1 2 3 4 5 0.8 1 1.2 z: Small =⇒Large ∆R (c) WikiHow Figure 4: Datasets splitting experiment. We split test sets into five parts according to z described in Section 3.2. The X-axis from left to right indicates the subsets of the test set with the value of z from small to large, and the Y-axis represents the ROUGE improvement of MATCHSUM over BERTEXT on this subset. XSum CNN/DM WikiHow PubMed Multi-News 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 ψ(D) Figure 5: ψ of different datasets. Reddit is excluded because it has too few samples in the test set. should be associated with inherent gaps presented in Section 3.3. To better understand their relation, we introduce ∆(D)∗as follows: ∆(D)∗= gsum(CMS) −gsum(CBE), (12) ∆(D)∗= 1 |D| X D∈D ∆(D)∗, (13) where CMS and CBE represent the candidate summary selected by MATCHSUM and BERTEXT in the document D, respectively. Therefore, ∆(D)∗ can indicate the improvement by MATCHSUM over BERTEXT on dataset D. Moreover, compared with the inherent gap between sentence-level and summary-level extractors, we define the ratio that MATCHSUM can learn on dataset D as: ψ(D) = ∆(D)∗/∆(D), (14) where ∆(D) is the inherent gap between sentencelevel and summary-level extractos. It is clear from Figure 5, the value of ψ(D) depends on z (see Figure 2) and the length of the gold summary (see Table 1). As the gold summaries get longer, the upper bound of summary-level approaches becomes more difficult for our model to reach. MATCHSUM can achieve 0.64 ψ(D) on XSum (23.3 words summary), however, ψ(D) is less than 0.2 in PubMed and Multi-News whose summary length exceeds 200. From another perspective, when the summary length are similar, our model performs better on datasets with more pearlsummaries. For instance, z is evenly distributed in Multi-News (see Figure 2), so higher ψ(D) (0.18) can be obtained than PubMed (0.09), which has the least pearl-summaries. A better understanding of the dataset allows us to get a clear awareness of the strengths and limitations of our framework, and we also hope that the above analysis could provide useful clues for future research on extractive summarization. 6 Conclusion We formulate the extractive summarization task as a semantic text matching problem and propose a novel summary-level framework to match the source document and candidate summaries in the semantic space. We conduct an analysis to show how our model could better fit the characteristic of the data. Experimental results show MATCHSUM outperforms the current state-of-the-art extractive model on six benchmark datasets, which demonstrates the effectiveness of our method. 
Acknowledgment We would like to thank the anonymous reviewers for their valuable comments. This work is supported by the National Key Research and Development Program of China (No. 2018YFC0831103), National Natural Science Foundation of China (No. U1936214 and 61672162), Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01) and ZJLab. 6206 References RM Alyguliyev. 2009. The two-stage unsupervised approach to multidocument summarization. Automatic Control and Computer Sciences, 43(5):276. Kristjan Arumae and Fei Liu. 2018. Reinforced extractive summarization with question-focused rewards. In Proceedings of ACL 2018, Student Research Workshop, pages 105–111. Sanghwan Bae, Taeuk Kim, Jihoon Kim, and Sanggoo Lee. 2019. Summary level training of sentence rewriting for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 10–20. Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S¨ackinger, and Roopak Shah. 1994. Signature verification using a” siamese” time delay neural network. In Advances in neural information processing systems, pages 737–744. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 675–686. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 484–494. Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), volume 2, pages 615–621. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186. Yue Dong, Yikang Shen, Eric Crawford, Herke van Hoof, and Jackie Chi Kit Cheung. 2018. Banditsum: Extractive summarization as a contextual bandit. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3739–3748. Alexander Richard Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R. Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In ACL (1), pages 1074–1084. Association for Computational Linguistics. Dimitrios Galanis and Ion Androutsopoulos. 2010. An extractive supervised two-stage method for sentence compression. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 885–893. Association for Computational Linguistics. Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, pages 10–18. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. 
In Advances in Neural Information Processing Systems, pages 1684–1692. Elad Hoffer and Nir Ailon. 2015. Deep metric learning using triplet network. In International Workshop on Similarity-Based Pattern Recognition, pages 84–92. Springer. Aishwarya Jadhav and Vaibhav Rajan. 2018. Extractive summarization with swap-net: Sentences and words from alternating pointer networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 142–151. Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim. 2019. Abstractive summarization of reddit posts with multi-level memory networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2519–2531. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Mahnaz Koupaee and William Yang Wang. 2018. Wikihow: A large scale text summarization dataset. arXiv preprint arXiv:1810.09305. Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019. Scoring sentence singletons and pairs for abstractive summarization. arXiv preprint arXiv:1906.00077. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 150–157. Yang Liu. 2019. Fine-tune bert for extractive summarization. arXiv preprint arXiv:1903.10318. Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. In Proceedings of 6207 the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3721–3731. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Alfonso Mendes, Shashi Narayan, Sebasti˜ao Miranda, Zita Marinho, Andr´e FT Martins, and Shay B Cohen. 2019. Jointly extracting and compressing documents with summary state representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3955–3966. Yishu Miao and Phil Blunsom. 2016. Language as a latent variable: Discrete generative models for sentence compression. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 319–328. Bhaskar Mitra, Fernando Diaz, and Nick Craswell. 2017. Learning to match using local and distributed representations of text for web search. In Proceedings of the 26th International Conference on World Wide Web, pages 1291–1299. International World Wide Web Conferences Steering Committee. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Thirty-First AAAI Conference on Artificial Intelligence. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, C¸ a glar Gulc¸ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. CoNLL 2016, page 280. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018a. 
Dont give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018b. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1747–1759. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Nils Reimers and Iryna Gurevych. 2019. Sentencebert: Sentence embeddings using siamese bertnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3973–3983. Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs with convolutional deep neural networks. In Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval, pages 373– 382. ACM. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Xiaojun Wan, Ziqiang Cao, Furu Wei, Sujian Li, and Ming Zhou. 2015. Multi-document summarization via discriminative summary reranking. arXiv preprint arXiv:1507.02062. Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuan-Jing Huang. 2020. Heterogeneous graph neural networks for extractive document summarization. In Proceedings of the 58th Conference of the Association for Computational Linguistics. Danqing Wang, Pengfei Liu, Ming Zhong, Jie Fu, Xipeng Qiu, and Xuanjing Huang. 2019. Exploring domain shift in extractive text summarization. arXiv preprint arXiv:1908.11664. Shuohang Wang and Jing Jiang. 2016. Learning natural language inference with lstm. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1442–1451. Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 4144–4150. AAAI Press. Jiacheng Xu and Greg Durrett. 2019. Neural extractive text summarization with syntactic compression. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, Hong Kong, China. Association for Computational Linguistics. Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Discourse-aware neural extractive model for text summarization. arXiv preprint arXiv:1910.14142. Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1744–1753. 6208 Dani Yogatama, Fei Liu, and Noah A Smith. 2015. Extractive summarization by maximizing semantic volume. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1961–1966. Haoyu Zhang, Yeyun Gong, Yu Yan, Nan Duan, Jianjun Xu, Ji Wang, Ming Gong, and Ming Zhou. 2019a. 
Pretraining-based natural language generation for text summarization. arXiv preprint arXiv:1902.09243. Xingxing Zhang, Furu Wei, and Ming Zhou. 2019b. Hibert: Document level pre-training of hierarchical bidirectional transformers for document summarization. In ACL. Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, and Xuan-Jing Huang. 2019a. Searching for effective neural extractive summarization: What works and whats next. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 1049–1058. Ming Zhong, Danqing Wang, Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2019b. A closer look at data bias in neural extractive summarization models. EMNLP-IJCNLP 2019, page 80. Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 654–663.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6209–6219 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6209 Heterogeneous Graph Neural Networks for Extractive Document Summarization Danqing Wang∗, Pengfei Liu∗, Yining Zheng, Xipeng Qiu†, Xuanjing Huang Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University 825 Zhangheng Road, Shanghai, China {dqwang18,pfliu14,ynzheng19,xpqiu,xjhuang}@fudan.edu.cn Abstract As a crucial step in extractive document summarization, learning cross-sentence relations has been explored by a plethora of approaches. An intuitive way is to put them in the graphbased neural network, which has a more complex structure for capturing inter-sentence relationships. In this paper, we present a heterogeneous graph-based neural network for extractive summarization (HETERSUMGRAPH), which contains semantic nodes of different granularity levels apart from sentences. These additional nodes act as the intermediary between sentences and enrich the cross-sentence relations. Besides, our graph structure is flexible in natural extension from a singledocument setting to multi-document via introducing document nodes. To our knowledge, we are the first one to introduce different types of nodes into graph-based neural networks for extractive document summarization and perform a comprehensive qualitative analysis to investigate their benefits. The code will be released on Github1. 1 Introduction Extractive document summarization aims to extract relevant sentences from the original documents and reorganize them as the summary. Recent years have seen a resounding success in the use of deep neural networks on this task (Cheng and Lapata, 2016; Narayan et al., 2018; Arumae and Liu, 2018; Zhong et al., 2019a; Liu and Lapata, 2019b). These existing models mainly follow the encoder-decoder framework in which each sentence will be encoded by neural components with different forms. To effectively extract the summary-worthy sentences from a document, a core step is to model ∗These two authors contributed equally. †Corresponding author. 1https://github.com/brxx122/ HeterSUMGraph the cross-sentence relations. Most current models capture cross-sentence relations with recurrent neural networks (RNNs) (Cheng and Lapata, 2016; Nallapati et al., 2017; Zhou et al., 2018). However, RNNs-based models are usually hard to capture sentence-level long-distance dependency, especially in the case of the long document or multidocuments. One more intuitive way is to model the relations of sentences using the graph structure. Nevertheless, it is challenging to find an effective graph structure for summarization. Efforts have been made in various ways. Early traditional work makes use of inter-sentence cosine similarity to build the connectivity graph like LexRank (Erkan and Radev, 2004) and TextRank (Mihalcea and Tarau, 2004). Recently, some works account for discourse inter-sentential relationships when building summarization graphs, such as the Approximate Discourse Graph (ADG) with sentence personalization features (Yasunaga et al., 2017) and Rhetorical Structure Theory (RST) graph (Xu et al., 2019). However, they usually rely on external tools and need to take account of the error propagation problem. A more straightforward way is to create a sentence-level fully-connected graph. 
To some extent, the Transformer encoder (Vaswani et al., 2017) used in recent work(Zhong et al., 2019a; Liu and Lapata, 2019b) can be classified into this type, which learns the pairwise interaction between sentences. Despite their success, how to construct an effective graph structure for summarization remains an open question. In this paper, we propose a heterogeneous graph network for extractive summarization. Instead of solely building graphs on sentence-level nodes, we introduce more semantic units as additional nodes in the graph to enrich the relationships between sentences. These additional nodes act as the intermediary that connects sentences. Namely, each additional node can be viewed as a special rela6210 tionship between sentences containing it. During the massage passing over the heterogeneous graph, these additional nodes will be iteratively updated as well as sentence nodes. Although more advanced features can be used (e.g., entities or topics), for simplicity, we use words as the semantic units in this paper. Each sentence is connected to its contained words. There are no direct edges for all the sentence pairs and word pairs. The constructed heterogeneous wordsentence graph has the following advantages: (a) Different sentences can interact with each other in consideration of the explicit overlapping word information. (b) The word nodes can also aggregate information from sentences and get updated. Unlike ours, existing models usually keep the words unchanged as the embedding layer. (c) Different granularities of information can be fully used through multiple message passing processes. (d) Our heterogeneous graph network is expandable for more types of nodes. For example, we can introduce document nodes for multi-document summarization. We highlight our contributions as follows: (1) To our knowledge, we are the first one to construct a heterogeneous graph network for extractive document summarization to model the relations between sentences, which contains not only sentence nodes but also other semantic units. Although we just use word nodes in this paper, more superior semantic units (e.g. entities) can be incorporated. (2) Our proposed framework is very flexible in extension that can be easily adapt from singledocument to multi-document summarization tasks. (3) Our model can outperform all existing competitors on three benchmark datasets without the pre-trained language models2. Ablation studies and qualitative analysis show the effectiveness of our models. 2 Related Work Extractive Document Summarization With the development of neural networks, great progress has been made in extractive document summarization. Most of them focus on the encoderdecoder framework and use recurrent neural networks (Cheng and Lapata, 2016; Nallapati et al., 2017; Zhou et al., 2018) or Transformer encoders 2Since our proposed model is orthogonal to the methods that using pre-trained models, we believe our model can be further boosted by taking the pre-trained models to initialize the node representations, which we reserve for the future. (Zhong et al., 2019b; Wang et al., 2019a) for the sentential encoding. Recently, pre-trained language models are also applied in summarization for contextual word representations (Zhong et al., 2019a; Liu and Lapata, 2019b; Xu et al., 2019; Zhong et al., 2020). Another intuitive structure for extractive summarization is the graph, which can better utilize the statistical or linguistic information between sentences. 
Early works focus on document graphs constructed from the content similarity among sentences, like LexRank (Erkan and Radev, 2004) and TextRank (Mihalcea and Tarau, 2004). Some recent works aim to incorporate a relational prior into the encoder via graph neural networks (GNNs) (Yasunaga et al., 2017; Xu et al., 2019). Methodologically, these works use only one type of node, formulating each document as a homogeneous graph.

Heterogeneous Graph for NLP Graph neural networks and their associated learning methods (i.e., message passing (Gilmer et al., 2017) and self-attention (Velickovic et al., 2017)) were originally designed for homogeneous graphs, where the whole graph shares a single node type. However, graphs in real-world applications usually come with multiple types of nodes (Shi et al., 2016), namely heterogeneous graphs. To model such structures, recent works have made preliminary explorations. Tu et al. (2019) introduced a heterogeneous graph neural network to encode documents, entities and candidates together for multi-hop reading comprehension. Linmei et al. (2019) focused on semi-supervised short text classification and constructed a topic-entity heterogeneous neural graph. For summarization, Wei (2012) proposes a heterogeneous graph consisting of topic, word and sentence nodes and uses a Markov chain model for the iterative update. Wang et al. (2019b) modify TextRank for their graph with keywords and sentences and thus put forward HeteroRank. Inspired by the success of heterogeneous graph-based neural networks on other NLP tasks, we introduce them to extractive text summarization to learn better node representations.

3 Methodology

Given a document D = {s1, · · · , sn} with n sentences, we can formulate extractive summarization as a sequence labeling task as in (Narayan et al., 2018; Liu and Lapata, 2019b). Our goal is to predict a sequence of labels y1, · · · , yn (yi ∈ {0, 1}) for sentences, where yi = 1 indicates that the i-th sentence should be included in the summary. The ground-truth labels, which we call ORACLE, are extracted using the greedy approach introduced by Nallapati et al. (2016) with the automatic metric ROUGE (Lin and Hovy, 2003).

[Figure 1: Model Overview. The framework consists of three major modules: graph initializers, the heterogeneous graph layer and the sentence selector. Green circles and blue boxes represent word and sentence nodes respectively. Orange solid lines denote the edge feature (TF-IDF) between word and sentence nodes and the thicknesses indicate the weight. The representations of sentence nodes will be finally used for summary selection.]

Generally speaking, our heterogeneous summarization graph consists of two types of nodes: basic semantic nodes (e.g. words, concepts, etc.) as relay nodes and other units of discourse (e.g. phrases, sentences, documents, etc.) as supernodes. Each supernode connects with the basic nodes contained in it and takes the importance of the relation as the edge feature. Thus, high-level discourse nodes can establish relationships with each other via basic nodes. In this paper, we use words as the basic semantic nodes for simplicity. HETERSUMGRAPH in Section 3.1 is a special case which contains only one type of supernode (sentences) for classification, while HETERDOCSUMGRAPH in Section 3.5 uses two (documents and sentences).
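Before turning to the graph construction itself, the greedy ORACLE construction mentioned above can be sketched briefly. This is a minimal illustration, assuming a helper `rouge_score(selected_sentences, reference)` (e.g., an average of ROUGE-1/2 F1); the exact scoring and stopping criterion of Nallapati et al. (2016) may differ in detail.

```python
def greedy_oracle_labels(doc_sents, reference, rouge_score, max_sents=None):
    """Greedily add the sentence that most improves ROUGE against the gold
    summary; stop when no remaining sentence yields an improvement."""
    selected, best = [], 0.0
    while max_sents is None or len(selected) < max_sents:
        candidates = []
        for i, sent in enumerate(doc_sents):
            if i in selected:
                continue
            score = rouge_score([doc_sents[j] for j in selected] + [sent], reference)
            candidates.append((score, i))
        if not candidates:
            break
        score, i = max(candidates)
        if score <= best:              # no further improvement: stop
            break
        selected.append(i)
        best = score
    return [1 if i in selected else 0 for i in range(len(doc_sents))]
```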
Based on our framework, other types of supernodes (such as paragraphs) can also be introduced and the only difference lies in the graph structure. 3.1 Document as a Heterogeneous Graph Given a graph G = {V, E}, where V stands for a node set and E represents edges between nodes, our undirected heterogeneous graph can be formally defined as V = Vw ∪Vs and E = {e11, · · · , emn}. Here, Vw = {w1, · · · , wm} denotes m unique words of the document and Vs = {s1, · · · , sn} corresponds to the n sentences in the document. E is a real-value edge weight matrix and eij ̸= 0 (i ∈{1, · · · , m}, j ∈{1, · · · , n}) indicates the j-th sentence contains the i-th word. Figure 1 presents the overview of our model, which mainly consists of three parts: graph initializers for nodes and edges, the heterogeneous graph layer and the sentence selector. The initializers first create nodes and edges and encode them for the document graph. Then the heterogeneous graph updates these node representations by iteratively passing messages between word and sentence nodes via Graph Attention Network (GAT) (Velickovic et al., 2017). Finally, the representations of sentence nodes are extracted to predict labels for summaries. 3.2 Graph Initializers Let Xw ∈Rm×dw and Xs ∈Rn×ds represent the input feature matrix of word and sentence nodes respectively, where dw is the dimension of the word embedding and ds is the dimension of each sentence representation vector. Specifically, we first use Convolutional Neural Networks (CNN) (LeCun et al., 1998) with different kernel sizes to capture the local n-gram feature for each sentence lj and then use the bidirectional Long Short-Term Memory (BiLSTM) (Hochreiter and Schmidhuber, 1997) layer to get the sentence-level feature gj. The concatenation of the CNN local feature and the BiLSTM global feature is used as the sentence node feature Xsj = [lj; gj]. To further include information about the importance of relationships between word and sentence nodes, we infuse TF-IDF values in the edge weights. The term frequency (TF) is the number of times wi occurs in sj and the inverse document frequency (IDF) is made as the inverse function of the out-degree of wi. 6212 3.3 Heterogeneous Graph Layer Given a constructed graph G with node features Xw ∪Xs and edge features E, we use graph attention networks (Velickovic et al., 2017) to update the representations of our semantic nodes. We refer to hi ∈Rdh, i ∈{1, · · · , (m + n)} as the hidden states of input nodes and the graph attention (GAT) layer is designed as follows: zij = LeakyReLU (Wa[Wqhi; Wkhj]) , (1) αij = exp(zij) P l∈Ni exp(zil), (2) ui = σ( X j∈Ni αijWvhj), (3) where Wa, Wq, Wk, Wv are trainable weights and αij is the attention weight between hi and hj. The multi-head attention can be denoted as: ui = ∥K k=1σ  X j∈Ni αk ijWkhi  . (4) Besides, we also add a residual connection to avoid gradient vanishing after several iterations. Therefore, the final output can be represented as: h′ i = ui + hi. (5) We further modify the GAT layer to infuse the scalar edge weights eij, which are mapped to the multi-dimensional embedding space eij ∈ Rmn×de. Thus, Equal 1 is modified as follows: zij = LeakyReLU (Wa[Wqhi; Wkhj; eij]) . (6) After each graph attention layer, we introduce a position-wise feed-forward (FFN) layer consisting of two linear transformations just as Transformer (Vaswani et al., 2017). Iterative updating To pass messages between word and sentence nodes, we define the information propagation as Figure 2. 
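Before spelling out that propagation, Eqs. 1–6 can be made concrete with a minimal single-head sketch in plain numpy. The LeakyReLU slope and the choice of σ are assumptions (the paper leaves σ unspecified and uses 8 concatenated heads, Eq. 4); the residual of Eq. 5 and the position-wise FFN are applied by the caller, as in Eqs. 8, 10 and 12.

```python
import numpy as np

def leaky_relu(x, slope=0.2):                         # slope value is an assumption
    return np.where(x > 0.0, x, slope * x)

def gat_attend(H_q, H_kv, E, neighbors, Wq, Wk, Wv, wa):
    """Single-head graph attention with edge features (Eqs. 1-3 and 6).
    H_q: (n_q, d) query nodes; H_kv: (n_kv, d) key/value nodes;
    E[i][j]: (d_e,) edge embedding between query i and neighbour j;
    neighbors[i]: indices of H_kv rows connected to query node i;
    Wq, Wk: (d_p, d); Wv: (d, d) so that the caller can add the residual."""
    U = np.zeros_like(H_q)
    for i in range(H_q.shape[0]):
        js = neighbors[i]
        # z_ij = LeakyReLU(wa . [Wq h_i ; Wk h_j ; e_ij])                (Eq. 6)
        z = leaky_relu(np.array(
            [wa @ np.concatenate([Wq @ H_q[i], Wk @ H_kv[j], E[i][j]]) for j in js]))
        alpha = np.exp(z - z.max()); alpha /= alpha.sum()                # Eq. 2
        # u_i = sigma(sum_j alpha_ij Wv h_j); tanh stands in for sigma   (Eq. 3)
        U[i] = np.tanh(sum(a * (Wv @ H_kv[j]) for a, j in zip(alpha, js)))
    return U
```

In the full model, K such heads are computed and concatenated (Eq. 4), and a two-layer position-wise FFN follows each attention step.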
Specifically, after the initialization, we update sentence nodes with their neighbor word nodes via the above GAT and FFN layer: U1 s←w = GAT(H0 s, H0 w, H0 w), (7) H1 s = FFN U1 s←w + H0 s  , (8) where H1 w = H0 w = Xw, H0 s = Xs and U1 s←w ∈ Rm×dh. GAT(H0 s, H0 w, H0 w) denotes that H0 s is used as the attention query and H0 w is used as the key and value. 𝑤3 𝑤1 𝑤2 𝑠1 𝑠2 𝑤3 𝑤1 𝑤2 𝑠1 𝑠2 (a) Update 𝑠1 (b) Update 𝑤1 Figure 2: The detailed update process of word and sentence nodes in Heterogeneous Graph Layer. Green and blue nodes are word and sentence nodes involved in this turn. Orange edges indicate the current information flow direction. First, for sentence s1, word w1 and w3 are used to aggregate word-level information in (a). Next, w1 is updated by the new representation of s1 and s2 in (b), which are the sentences it occurs. See Section 3.3 for details on the notation. After that, we obtain new representations for word nodes using the updated sentence nods and further update sentence nodes iteratively. Each iteration contains a sentence-to-word and a wordto-sentence update process. For the t-th iteration, the process can be represented as: Ut+1 w←s = GAT(Ht w, Ht s, Ht s), (9) Ht+1 w = FFN Ut+1 w←s + Ht w  , (10) Ut+1 s←w = GAT(Ht s, Ht+1 w , Ht+1 w ), (11) Ht+1 s = FFN Ut+1 s←w + Ht s  . (12) As Figure 2 shows, word nodes can aggregate the document-level information from sentences. For example, the high degree of a word node indicates the word occurs in many sentences and is likely to be the keyword of the document. Regarding sentence nodes, the one with more important words tends to be selected as the summary. 3.4 Sentence Selector Finally, we need to extract sentence nodes included in the summary from the heterogeneous graph. Therefore, we do node classification for sentences and cross-entropy loss is used as the training objective for the whole system. Trigram blocking Following Paulus et al. (2017) and Liu and Lapata (2019b), we use Trigram Blocking for decoding, which is simple but powerful version of Maximal Marginal Relevance (Carbonell and Goldstein, 1998). Specifically, we rank sentences by their scores and discard those which have trigram overlappings with their predecessors. 6213 𝑤! 𝑤" 𝑤# 𝑠"! 𝑠"" 𝑑! 𝑑" 𝑠!! 𝑠!" 𝑤$ Figure 3: Graph structure of HETERDOCSUMGRAPH for multi-document summarization (corresponding to the Graph Layer part of Figure 1). Green, blue and orange boxes represent word, sentence and document nodes respectively. d1 consists of s11 and s12 while d2 contains s21 and s22. As a relay node, the relation of document-document, sentence-sentence, and sentencedocument can be built through the common word nodes. For example, sentence s11, s12 and s21 share the same word w1, which connects them across documents. 3.5 Multi-document Summarization For multi-document summarization, the documentlevel relation is crucial for better understanding the core topic and most important content of this cluster. However, most existing neural models ignore this hierarchical structure and concatenate documents to a single flat sequence(Liu et al., 2018; Fabbri et al., 2019). Others try to model this relation by attention-based full-connected graph or take advantage of similarity or discourse relations(Liu and Lapata, 2019a). Our framework can establish the document-level relationship in the same way as the sentence-level by just adding supernodes for documents(as Figure 3), which means it can be easily adapted from single-document to multi-document summarization. 
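Before the multi-document extension is detailed further, the full propagation schedule of Eqs. 7–12 reduces to a few lines. In this sketch, `gat_sw`/`gat_ws` are the sentence←word and word←sentence attention directions with their weights and edge features already bound (e.g., via `functools.partial` over the single-head sketch above), and `ffn_w`/`ffn_s` are the position-wise feed-forward layers.

```python
def propagate(X_w, X_s, gat_sw, gat_ws, ffn_w, ffn_s, t=1):
    """Iterative word/sentence message passing (Eqs. 7-12); t = 1 in the paper."""
    H_w, H_s = X_w, X_s
    H_s = ffn_s(gat_sw(H_s, H_w) + H_s)        # Eqs. 7-8: initial sentence <- word
    for _ in range(t):
        H_w = ffn_w(gat_ws(H_w, H_s) + H_w)    # Eqs. 9-10: word <- sentence
        H_s = ffn_s(gat_sw(H_s, H_w) + H_s)    # Eqs. 11-12: sentence <- word
    return H_w, H_s
```

The resulting sentence representations H_s feed the node classifier of Section 3.4, with trigram blocking applied at decoding time.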
The heterogeneous graph is then extended to three types of nodes: V = Vw ∪Vs ∪Vd and Vd = {d1, · · · , dl} and l is the number of source documents. We name it as HETERDOCSUMGRAPH. As we can see in Figure 3, word nodes become the bridges between sentences and documents. Sentences containing the same words connect with each other regardless of their distance across documents, while documents establish relationships based on their similar contents. Document nodes can be viewed as a special type of sentence nodes: a document node connects with contained word nodes and the TF-IDF value is used as the edge weight. Besides, document nodes also share the same update process as sentence nodes. The differences lie in the initialization, where the document node takes the mean-pooling of its sentence node features as its initial state. During the sentence selection, the sentence nodes are concatenated with the corresponding document representations to obtain the final scores for multi-document summarization. 4 Experiment We evaluate our models both on single- and multidocument summarization tasks. Below, we start our experiment with the description of the datasets. 4.1 Datasets CNN/DailyMail The CNN/DailyMail question answering dataset (Hermann et al., 2015; Nallapati et al., 2016) is the most widely used benchmark dataset for single-document summarization. The standard dataset split contains 287,227/13,368/11,490 examples for training, validation, and test. For the data prepossessing, we follow Liu and Lapata (2019b), which use the nonanonymized version as See et al. (2017), to get ground-truth labels. NYT50 NYT50 is also a single-document summarization dataset, which was collected from New York Times Annotated Corpus (Sandhaus, 2008) and preprocessed by Durrett et al. (2016). It contains 110,540 articles with summaries and is split into 100,834 and 9706 for training and test. Following Durrett et al. (2016), we use the last 4,000 examples from the training set as validation and filter test examples to 3,452. Multi-News The Multi-News dataset is a largescale multi-document summarization introduced by Fabbri et al. (2019). It contains 56,216 articlessummary pairs and each example consists of 2-10 source documents and a human-written summary. Following their experimental settings, we split the dataset into 44,972/5,622/5,622 for training, validation and test examples and truncate input articles to 500 tokens. 4.2 Settings and Hyper-parameters For both single-document and multi-document summarization, we limit the vocabulary to 50,000 and initialize tokens with 300-dimensional GloVe embeddings (Pennington et al., 2014). We filter stop words and punctuations when creating word 6214 nodes and truncate the input document to a maximum length of 50 sentences. To get rid of the noisy common words, we further remove 10% of the vocabulary with low TF-IDF values over the whole dataset. We initialize sentence nodes with ds = 128 and edge features eij in GATe with de = 50. Each GAT layer is 8 heads and the hidden size is dh = 64, while the inner hidden size of FFN layers is 512. During training, we use a batch size of 32 and apply Adam optimizer (Kingma and Ba, 2014) with a learning rate 5e-4. An early stop is performed when valid loss does not descent for three continuous epochs. 
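Circling back to the HETERDOCSUMGRAPH extension of Section 3.5, its two deviations from the single-document model, mean-pooled initialization of document nodes and sentence–document concatenation at scoring time, can be sketched as follows; the linear scorer here is an illustrative assumption standing in for the actual classifier.

```python
import numpy as np

def init_doc_nodes(X_s, doc_of_sent, n_docs):
    """Each document node starts as the mean of its sentences' initial features."""
    return np.stack([X_s[[i for i, d in enumerate(doc_of_sent) if d == k]].mean(axis=0)
                     for k in range(n_docs)])

def score_sentences(H_s, H_d, doc_of_sent, w, b=0.0):
    """Concatenate each sentence with its document representation and score it."""
    feats = np.stack([np.concatenate([H_s[i], H_d[doc_of_sent[i]]])
                      for i in range(H_s.shape[0])])
    return feats @ w + b
```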
We select the number of iterations t = 1 based on the performance on the validation set.3 For decoding, we select top-3 sentences for CNN/DailyMail and NYT50 datasets and top-9 for Multi-New according to the average length of their human-written summaries. 4.3 Models for Comparison Ext-BiLSTM Extractive summarizer with BiLSTM encoder learns the cross-sentence relation by regarding a document as a sequence of sentences. For simplification, we directly take out the initialization of sentence nodes for classification, which includes a CNN encoder for the word level and 2layer BiLSTM for sentence level. This model can also be viewed as an ablation study of our HETERSUMGRAPH on the updating of sentence nodes. Ext-Transformer Extractive summarizers with Transformer encoder learn the pairwise interaction (Vaswani et al., 2017) between sentences in a purely data-driven way with a fully connected priori. Following (Liu and Lapata, 2019b), we implement a Transformer-based extractor as a baseline, which contains the same encoder for words followed by 12 Transformer encoder layers for sentences. ExtTransformer can be regarded as the sentence-level fully connected graph. HETERSUMGRAPH Our heterogeneous summarization graph model relations between sentences based on their common words, which can be denoted as sentence-word-sentence relationships. HETERSUMGRAPH directly selects sentences for the summary by node classification, while HETERSUMGRAPH with trigram blocking further utilizes the n-gram blocking to reduce redundancy. 3The detailed experimental results are attached in the Appendix Section. Model R-1 R-2 R-L LEAD-3 (See et al., 2017) 40.34 17.70 36.57 ORACLE (Liu and Lapata, 2019b) 52.59 31.24 48.87 REFRESH (Narayan et al., 2018) 40.00 18.20 36.60 LATENT (Zhang et al., 2018) 41.05 18.77 37.54 BanditSum (Dong et al., 2018) 41.50 18.70 37.60 NeuSUM (Zhou et al., 2018) 41.59 19.01 37.98 JECS (Xu and Durrett, 2019) 41.70 18.50 37.90 LSTM+PN (Zhong et al., 2019a) 41.85 18.93 38.13 HER w/o Policy (Luo et al., 2019) 41.70 18.30 37.10 HER w Policy (Luo et al., 2019) 42.30 18.90 37.60 Ext-BiLSTM 41.59 19.03 38.04 Ext-Transformer 41.33 18.83 37.65 HSG 42.31 19.51 38.74 HSG + Tri-Blocking 42.95 19.76 39.23 Table 1: Performance (Rouge) of our proposed models against recently released summarization systems on CNN/DailyMail. 5 Results and Analysis 5.1 Single-document Summarization We evaluate our single-document model on CNN/DailyMail and NYT50 and report the unigram, bigram and longest common subsequence overlap with reference summaries by R-1, R-2 and R-L. Due to the limited computational resource, we don’t apply pre-trained contextualized encoder (i.e. BERT (Devlin et al., 2018)) to our models, which we will regard as our future work. Therefore, here, we only compare with models without BERT for the sake of fairness. Results on CNN/DailyMail Table 1 shows the results on CNN/DailyMail. The first part is the LEAD-3 baseline and ORACLE upper bound, while the second part includes other summarization models. We present our models (described in Section 4.3) in the third part. Compared with ExtBiLSTM, our heterogeneous graphs achieve more than 0.6/0.51/0.7 improvements on R-1, R-2 and R-L, which indicates the cross-sentence relationships learned by our sentence-word-sentence structure is more powerful than the sequential structure. Besides, Our models also outperform ExtTransformer based on fully connected relationships. 
This demonstrates that our graph structures effectively prune unnecessary connections between sentences and thus improve the performance of sentence node classification. Compared with the second block of Figure 1, we observe that HETERSUMGRAPH outperforms all previous non-BERT-based summarization systems 6215 and trigram blocking leads to a great improvement on all ROUGE metrics. Among them, HER (Luo et al., 2019) is a comparable competitor to our HETERSUMGRAPH, which formulated the extractive summarization task as a contextual-bandit problem and solved it with reinforcement learning. Since the reinforcement learning and our trigram blocking plays a similar role in reorganizing sentences into a summary (Zhong et al., 2019a), we additionally compare HER without policy gradient with HETERSUMGRAPH. Our HETERSUMGRAPH achieve 0.61 improvements on R-1 over HER without policy for sentence scoring, and HETERSUMGRAPH with trigram blocking outperforms by 0.65 over HER for the reorganized summaries. Model R-1 R-2 R-L First sentence (Durrett et al., 2016) 28.60 17.30 First k words (Durrett et al., 2016) 35.70 21.60 LEAD-3 38.99 18.74 35.35 ORACLE 60.54 40.75 57.22 COMPRESS (Durrett et al., 2016) 42.20 24.90 SUMO (Liu et al., 2019) 42.30 22.70 38.60 PG* (See et al., 2017) 43.71 26.40 DRM (Paulus et al., 2017) 42.94 26.02 Ext-BiLSTM 46.32 25.84 42.16 Ext-Transformer 45.07 24.72 40.85 HSG 46.89 26.26 42.58 HSG + Tri-Blocking 46.57 25.94 42.25 Table 2: Limited-length ROUGE Recall on NYT50 test set. The results of models with * are copied from Liu and Lapata (2019b) and ’-’ means that the original paper did not report the result. Results on NYT50 Results on NYT50 are summarized in Table 2. Note that we use limited-length ROUGE recall as Durrett et al. (2016), where the selected sentences are truncated to the length of the human-written summaries and the recall scores are used instead of F1. The first two lines are baselines given by Durrett et al. (2016) and the next two lines are our baselines for extractive summarization. The second and third part report the performance of other non-BERT-based works and our models respectively. Again, we observe that our cross-sentence relationship modeling performs better than BiLSTM and Transformer. Our models also have strong advantages over other non-BERT-based approaches on NYT50. Meanwhile, we find trigram block doesn’t work as well as shown on CNN/DailyMail, and we attribute the reason to the special formation of summaries of CNN/DailyMail dataset. 4 Ablation on CNN/DailyMail In order to better understand the contribution of different modules to the performance, we conduct ablation study using our proposed HETERSUMGRAPH model on CNN/DailyMail dataset. First, we remove the filtering mechanism for low TF-IDF words and the edge weights respectively. We also remove residual connections between GAT layers. As a compensation, we concatenate the initial sentence feature after updating messages from nearby word nodes in Equal 8: H1 s = FFN [U1 s←w; H0 s]  . (13) Furthermore, we make iteration number t = 0, which deletes the word updating and use the sentence representation H1 s for classification. Finally, we remove the BiLSTM layer in the initialization of sentence nodes. As Table 3 shows, the removal of low TF-IDF words leads to increases on R-1 and R-L but drops on R-2. We suspect that filtering noisy words enable the model to better focus on useful word nodes, at the cost of losing some bigram information. 
The residual connection plays an important role in the combination of the original representation and the updating message from another type of nodes, which cannot be replaced by the concatenation. Besides, the introduction of edge features, word update and BiLSTM initialization for sentences also show their effectiveness. 5.2 Multi-document Summarization We first take the concatenation of the First-k sentences from each source document as the baseline and use the codes and model outputs5 released by Fabbri et al. (2019) for other models. To explore the adaptability of our model to multidocument summarization, we concatenate multisource documents to a single mega-document and apply HETERSUMGRAPH as the baseline. For comparison, we extend HETERSUMGRAPH to multi-document settings HETERDOCSUMGRAPH 4Nallapati et al. (2016) concatenate summary bullets, which are written for different parts of the article and have few overlaps with each other, as a multi-sentence summary. However, when human write summaries for the whole article (such as NYT50 and Multi-News), they will use key phrases repeatedly. This means roughly removing sentences by n-gram overlaps will lead to loss of important information. 5https://github.com/Alex-Fabbri/ Multi-News 6216 Model R-1 R-2 R-L HSG 42.31 19.51 38.74 - filter words 42.24 19.56 38.68 - edge feature 42.14 19.41 38.60 - residual connection 41.59 19.08 38.05 - sentence update 41.59 19.03 38.04 - word update 41.70 19.16 38.15 - BiLSTM 41.70 19.09 38.13 Table 3: Ablation studies on CNN/DailyMail test set. We remove various modules and explore their influence on our model. ’-’ means we remove the module from the original HETERSUMGRAPH. Note that HETERSUMGRAPH without the updating of sentence nodes is actually the Ext-BiLSTM model described in Section 4.3. as described in Section 3.5. Our results are presented in Table 4. Specifically, we observe that both of our HETERSUMGRAPH and HETERDOCSUMGRAPH outperform previous methods while HETERDOCSUMGRAPH achieves better performance improvements. This demonstrates the introduction of document nodes can better model the documentdocument relationships and is beneficial for multidocument summarization. As mentioned above, trigram blocking does not work for the Multi-News dataset, since summaries are written as a whole instead of the concatenations of summary bullets for each source document. Model R-1 R-2 R-L First-1 25.44 7.06 22.12 First-2 35.70 10.28 31.71 First-3 40.21 12.13 37.13 ORACLE 52.32 22.23 47.93 LexRank* (Erkan and Radev, 2004) 41.77 13.81 37.87 TextRank* (Mihalcea and Tarau, 2004) 41.95 13.86 38.07 MMR* (Carbonell and Goldstein, 1998) 44.72 14.92 40.77 PG† (Lebanoff et al., 2018) 44.55 15.54 40.75 BottomUp† (Gehrmann et al., 2018) 45.27 15.32 41.38 Hi-MAP† (Fabbri et al., 2019) 45.21 16.29 41.39 HSG 45.66 16.22 41.80 HSG + Tri-Blocking 44.92 15.59 40.89 HDSG 46.05 16.35 42.08 HDSG + Tri-Blocking 45.55 15.78 41.29 Table 4: Results on the test set of Multi-News. We reproduce models with ‘*’ via the released code and directly use the outputs of † provided by Fabbri et al. (2019) for evaluation. 
5.3 Qualitative Analysis We further design several experiments to probe into how our HETERSUMGRAPH and HETERDOC0.4 0.6 0.8 ∆˜R (0, 1.25) (1.25, 1.5)(1.5, 1.75)(1.75, 2.0) (2.0, ∞) 30 35 Average degree of word nodes ˜R BiLSTM HSG Figure 4: Relationships between the average degree of word nodes of the document (x-axis) and ˜R, which is the mean of R-1, R-2 and R-L (lines for left y-axis), and between ∆˜R, which is the delta ˜R of HETERSUMGRAPH and Ext-BiLSTM (histograms for right y-axis). SUMGRAPH help the single- and multi-document summarization. Degree of word nodes In HETERSUMGRAPH, the degree of a word node indicates its occurrence across sentences and thus can measure the redundancy of the document to some extent. Meanwhile, words with a high degree can aggregate information from multiple sentences, which means that they can benefit more from the iteration process. Therefore, it is important to explore the influence of the node degree of words on the summarization performance. We first calculate the average degree of word nodes for each example based on the constructed graph. Then the test set of CNN/DailyMail is divided into 5 intervals based on it (x-axis in Figure 4). We evaluate the performance of HETERSUMGRAPH and Ext-BiLSTM in various parts and the mean score of R-1, R-2, R-L is drawn as lines (left y-axis ˜R). The ROUGE increases with the increasing of the average degree of word nodes in the document, which means that articles with a high redundancy are easier for neural models to summarize. To make ∆˜R between models more obvious, we draw it with histograms (right y-axis). From Figure 4, we can observe that HETERSUMGRAPH performs much better for documents with a higher average word node degree. This proves that the benefit brought by word nodes lies in the aggregation of information from sentences and the propagation of their global representations. Number of source documents We also investigate how the number of source documents influences the performance of our model. To this end, 6217 2 3 4 5 6 28 30 32 34 36 Number of source documents ˜R First-3 HSG HDSG Figure 5: Relationship between number of source documents (x-axis) and ˜R (y-axis). we divide the test set of Multi-News into different parts by the number of source documents and discard parts with less than 100 examples. Then, we take First-3 as the baseline, which concatenates the top-3 sentences of each source document as the summary. In Figure 5, we can observe that the lead baseline raises while both of our model performance degrade and finally they converge to the baseline. This is because it is more challenging for models to extract limited-number sentences that can cover the main idea of all source documents with the increasing number of documents. However, the First-3 baseline is forced to take sentences from each document which can ensure the coverage. Besides, the increase of document number enlarges the performance gap between HETERSUMGRAPH and HETERDOCSUMGRAPH. This indicates the benefit of document nodes will become more significant for more complex document-document relationships. 6 Conclusion In this paper, we propose a heterogeneous graphbased neural network for extractive summarization. The introduction of more fine-grained semantic units in the summarization graph helps our model to build more complex relationships between sentences . It is also convenient to adapt our singledocument graph to multi-document with document nodes. 
Furthermore, our models have achieved the best results on CNN/DailyMail compared with non-BERT-based models, and we will take the pretrained language models into account for better encoding representations of nodes in the future. Acknowledgment This work was supported by the National Natural Science Foundation of China (No. U1936214 and 61672162), Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01) and ZJLab. References Kristjan Arumae and Fei Liu. 2018. Reinforced extractive summarization with question-focused rewards. In Proceedings of ACL 2018, Student Research Workshop, pages 105–111. Jaime G Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In SIGIR, volume 98, pages 335–336. Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 484–494. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Yue Dong, Yikang Shen, Eric Crawford, Herke van Hoof, and Jackie Chi Kit Cheung. 2018. Banditsum: Extractive summarization as a contextual bandit. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3739–3748. Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. Learning-based single-document summarization with compression and anaphoricity constraints. arXiv preprint arXiv:1603.08887. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artificial intelligence research, 22:457–479. Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074–1084, Florence, Italy. Association for Computational Linguistics. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109. 6218 Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. In ICML, pages 1263–1272. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems, pages 1684–1692. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Logan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. Adapting the neural encoder-decoder framework from single to multi-document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4131–4141. Yann LeCun, L´eon Bottou, Yoshua Bengio, Patrick Haffner, et al. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. 
In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. Hu Linmei, Tianchi Yang, Chuan Shi, Houye Ji, and Xiaoli Li. 2019. Heterogeneous graph attention networks for semi-supervised short text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4823– 4832. Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. Proceedings of the 6th International Conference on Learning Representations. Yang Liu and Mirella Lapata. 2019a. Hierarchical transformers for multi-document summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5070– 5081. Yang Liu and Mirella Lapata. 2019b. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3721–3731, Hong Kong, China. Association for Computational Linguistics. Yang Liu, Ivan Titov, and Mirella Lapata. 2019. Single document summarization as tree induction. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1745–1755. Ling Luo, Xiang Ao, Yan Song, Feiyang Pan, Min Yang, and Qing He. 2019. Reading like her: Human reading inspired extractive summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3024–3034. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 404–411. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Thirty-First AAAI Conference on Artificial Intelligence. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, C¸ a glar Gulc¸ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. CoNLL 2016, page 280. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 1747–1759. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive summarization. arXiv preprint arXiv:1705.04304. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Evan Sandhaus. 2008. The new york times annotated corpus. Linguistic Data Consortium, Philadelphia, 6(12):e26752. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. 
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1073–1083. Chuan Shi, Yitong Li, Jiawei Zhang, Yizhou Sun, and S Yu Philip. 2016. A survey of heterogeneous information network analysis. IEEE Transactions on Knowledge and Data Engineering, 29(1):17–37. 6219 Ming Tu, Guangtao Wang, Jing Huang, Yun Tang, Xiaodong He, and Bowen Zhou. 2019. Multi-hop reading comprehension across multiple documents by reasoning over heterogeneous graphs. arXiv preprint arXiv:1905.07374. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903. Danqing Wang, Pengfei Liu, Ming Zhong, Jie Fu, Xipeng Qiu, and Xuanjing Huang. 2019a. Exploring domain shift in extractive text summarization. arXiv preprint arXiv:1908.11664. Hsiu-Yi Wang, Jia-Wei Chang, and Jen-Wei Huang. 2019b. User intention-based document summarization on heterogeneous sentence networks. In International Conference on Database Systems for Advanced Applications, pages 572–587. Springer. Yang Wei. 2012. Document summarization method based on heterogeneous graph. In 2012 9th International Conference on Fuzzy Systems and Knowledge Discovery, pages 1285–1289. IEEE. Jiacheng Xu and Greg Durrett. 2019. Neural extractive text summarization with syntactic compression. arXiv preprint arXiv:1902.00863. Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Discourse-aware neural extractive model for text summarization. arXiv preprint arXiv:1910.14142. Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. arXiv preprint arXiv:1706.06681. Xingxing Zhang, Mirella Lapata, Furu Wei, and Ming Zhou. 2018. Neural latent extractive document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 779–784. Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, and Xuan-Jing Huang. 2020. Extractive summarization as text matching. In Proceedings of the 58th Conference of the Association for Computational Linguistics. Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, and Xuan-Jing Huang. 2019a. Searching for effective neural extractive summarization: What works and what’s next. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1049–1058. Ming Zhong, Danqing Wang, Pengfei Liu, Xipeng Qiu, and Xuan-Jing Huang. 2019b. A closer look at data bias in neural extractive summarization models. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 80–89. Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 654–663. A Appendices In order to select the best iteration number for HETERSUMGRAPH, we compare performances of different t on the validation set of CNN/DM. All models are trained on a single GeForce RTX 2080 Ti GPU for about 5 epochs. 
As Table 5 shows, our HETERSUMGRAPH obtains comparable results for t = 1 and t = 3. However, when the iteration number goes from 1 to 3, the time per epoch nearly doubles. Therefore, we take t = 1 to balance time cost and model performance.

Number   R-1     R-2     R-L     Time
t = 0    43.63   19.58   37.39   3.16h
t = 1    44.26   19.97   38.03   5.04h
t = 2    44.13   19.85   37.87   7.20h
t = 3    44.28   19.96   37.98   8.93h

Table 5: Different numbers of iterative updating turns for sentence nodes. The experiments are performed on the validation set of CNN/DM. Time is the average time of one epoch.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6220–6231 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6220 Jointly Learning to Align and Summarize for Neural Cross-Lingual Summarization Yue Cao, Hui Liu, Xiaojun Wan Wangxuan Institute of Computer Technology, Peking University Center for Data Science, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University {yuecao,xinkeliuhui,wanxiaojun}@pku.edu.cn Abstract Cross-lingual summarization is the task of generating a summary in one language given a text in a different language. Previous works on cross-lingual summarization mainly focus on using pipeline methods or training an endto-end model using the translated parallel data. However, it is a big challenge for the model to directly learn cross-lingual summarization as it requires learning to understand different languages and learning how to summarize at the same time. In this paper, we propose to ease the cross-lingual summarization training by jointly learning to align and summarize. We design relevant loss functions to train this framework and propose several methods to enhance the isomorphism and cross-lingual transfer between languages. Experimental results show that our model can outperform competitive models in most cases. In addition, we show that our model even has the ability to generate cross-lingual summaries without access to any cross-lingual corpus. 1 Introduction Neural abstractive summarization has witnessed rapid growth in recent years. Variants of sequenceto-sequence models have shown to obtain promising results on English (See et al., 2017) or Chinese summarization datasets. However, Cross-lingual summarization, which aims at generating a summary in one language from input text in a different language, has been rarely studied because of the lack of parallel corpora. Early researches on cross-lingual abstractive summarization are mainly based on the summarization-translation or translationsummarization pipeline paradigm and adopt different strategies to incorporate bilingual features (Leuski et al., 2003; Orasan and Chiorean, 2008; Wan et al., 2010; Wan, 2011) into the pipeline model. Recently, Shen et al. (2018) first propose a neural cross-lingual summarization system based on a large-scale corpus. They first translate the texts automatically from the source language into the target language and then use the teacher-student framework to train a cross-lingual summarization model. Duan et al. (2019) further improve this teacher-student framework by using genuine summaries paired with the translated pseudo source sentences to train the cross-lingual summarization model. Zhu et al. (2019) propose a multi-task learning framework to train a neural cross-lingual summarization model. Cross-lingual summarization is a challenging task as it requires learning to understand different languages and learning how to summarize at the same time. It would be difficult for the model to directly learn cross-lingual summarization. In this paper, we explore this question: can we ease the training and enhance the cross-lingual summarization by establishing alignment of context representations between two languages? Learning cross-lingual representations has been proven a beneficial method for cross-lingual transfer for some downstream tasks (Klementiev et al., 2012; Artetxe et al., 2018; Ahmad et al., 2019; Chen et al., 2019). 
The underlying idea is to learn a shared embedding space for two languages to improve the model’s ability for cross-lingual transfer. Recently, it has been shown that this method can also be applied to context representations (Aldarmaki and Diab, 2019; Schuster et al., 2019). In this paper, we show that the learning of cross-lingual representations is also beneficial for neural crosslingual summarization models. We propose a multi-task framework that jointly learns to summarize and align context-level representations. Concretely, we first integrate monolingual summarization models and cross-lingual summarization models into one unified model and then 6221 build two linear mappings to project the context representation from one language to the other. We then design several relevant loss functions to learn the mappers and facilitate the cross-lingual summarization. In addition, we propose some methods to enhance the isomorphism and cross-lingual transfer between different languages. We also show that the learning of aligned representation enables our model to generate cross-lingual summaries even in a fully unsupervised way where no parallel crosslingual data is required. We conduct experiments on several public crosslingual summarization datasets. Experiment results show that our proposed model outperforms competitive models in most cases, and our model also works on the unsupervised setting. To the best of our knowledge, we are the first to propose an unsupervised framework for learning neural crosslingual summarization. In summary, our primary contributions are as follow: • We propose a framework that jointly learns to align and summarize for neural cross-lingual summarization and design relevant loss functions to train our system. • We propose a procedure to train our crosslingual summarization model in an unsupervised way. • The experimental results show that our model outperforms competitive models in most cases, and our model has the ability to generate cross-lingual summarization even without any cross-lingual corpus. 2 Overview We show the overall framework of our proposed model in Figure 1. Our model consists of two encoders, two decoders, two linear mappers, and two discriminators. Suppose we have an English source text x = {x1, . . . , xm} and a Chinese source text y = {y1, . . . , yn}, which consist of m and n words, respectively. The English encoder φEX (res. Chinese encoder φEY) transforms x (res. y) into its context representation zx (res. zy), and the decoder φDX (res. φDY) reads the memory zx (res. zy) and generates the corresponding English summary ˜x (res. Chinese summary ˜y). The mappers MX : Zx →Zy and MY : Zy → Zx are used for transformations between zx and Figure 1: The overall framework of our proposed model. zy, and the discriminators DX and DY are used for discriminating between the encoded representations and the mapped representations. Taking English-to-Chinese summarization for example, our model generates cross-lingual summaries as follows: First we use the English encoder to get the English context representations, then we use the mapper to map English representations into Chinese space. Lastly the Chinese decoder is used to generate Chinese summaries. In Section 3, we describe the techniques we adopt to enhance the cross-lingual transferability of the model. In Section 4 and Section 5, we describe the unsupervised training objective and supervised training objective for cross-lingual summarization, respectively. 
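To make the overall pipeline concrete, here is a minimal sketch of the English-to-Chinese inference path described in this overview. The encoder, mapper and decoder are passed in as callables; the greedy decoding loop and the special tokens are illustrative assumptions rather than the authors' implementation.

```python
def summarize_en_to_zh(x_tokens, enc_en, mapper_x, dec_zh, max_len=120,
                       bos="<zh>", eos="<eos>"):
    """Encode in the source language, map the contextual representations into
    the target-language space with M_X, then decode with the Chinese decoder."""
    z_x = enc_en(x_tokens)            # z_x: English contextual representations
    z_x2y = mapper_x(z_x)             # M_X : Z_x -> Z_y (a linear map)
    summary = [bos]                   # first token marks the target language (cf. Sec. 3.2)
    for _ in range(max_len):
        next_tok = dec_zh(summary, memory=z_x2y)   # attend over the mapped memory
        if next_tok == eos:
            break
        summary.append(next_tok)
    return summary[1:]
```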
3 Model Adjustment for Cross-Lingual Transfer 3.1 Normalizing the Representations In our model, we adopt Transformer (Vaswani et al., 2017) as our encoder and decoder, which is the same with previous works (Duan et al., 2019; Zhu et al., 2019). The encoder and decoder are connected via cross-attention. The cross-attention is implemented as the following dot-product attention module: Attention (S, T) = softmax TS⊤ √dk  S (1) where S is the packed encoder-side contextual representation, T is the packed decoder-side contextual representation and dk is the model size. 6222 In the dot-product module, it would be beneficial if the contextual representations of the encoder and decoder have the same distributions. However, in the cross-lingual setting, the encoder and decoder deal with different languages and thus the distributions of the learned contextual representations may be inconsistent. This motivates us to explicitly learn alignment relationships between languages. To make the contextual representations of two languages easier to be aligned, we introduce the normalization technique into the transformer model. Normalizing the word representations has been proved an effective technique on word alignment (Xing et al., 2015). After normalization, two sets of embeddings are both located on a unit hypersphere, which makes them easier to be aligned. We achieve this by introducing the prenormalization technique and replacing the LayerNorm with ScaleNorm (Nguyen and Salazar, 2019): oℓ+1 = LayerNorm (oℓ+ Fℓ(oℓ)) ⇓ oℓ+1 = oℓ+ Fℓ(ScaleNorm (oℓ)) where Fℓis the ℓ-th layer and oℓis its input. The formula for calculating ScaleNorm is: ScaleNorm(x; g) = g · x/∥x∥ (2) where g is a hyper-parameter. An additional benefit of ScaleNorm is that after being normalized, the dot-product of two vectors u⊤v is equivalent to their cosine distance u⊤v ∥u∥∥v∥, which may benefit the attention module in Transformer. We will conduct experiments to verify this. 3.2 Enhancing the Isomorphism A key assumption of aligning the representations of two languages is the isomorphism of learned monolingual representations. Some researchers show that the isomorphism assumption weakens when two languages are etymologically distant (Søgaard et al., 2018; Patra et al., 2019). However, Ormazabal et al. (2019) show that this limitation is due to the independent training of two separate monolingual embeddings, and they suggest to jointly learn cross-lingual representations on monolingual corpora. Inspired by Ormazabal et al. (2019), we take the following approaches to address the isomorphism problem. First, we combine the English and Chinese summarization corpora and build a unified vocabulary. Second, we share encoders and decoders in our model. Sharing encoders and decoders can also enforce the model to learn shared contextual representations across languages. For the shared decoder, to indicate the target language, we set the first token of the decoder to specify the language the module is operating with. Third, we train several monolingual summarization steps before cross-lingual training, as shown in the first line in Alg. 1. The pre-trained monolingual summarization steps also allow the model to learn easier monolingual summarization first, then further learn cross-lingual summarization, which may reduce the training difficulty. 4 Unsupervised Training Objective We describe the objective of unsupervised crosslingual summarization in this section. The whole training procedure can be found in Alg. 1. 
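The ScaleNorm/pre-norm block of Section 3.1 is compact enough to sketch before the individual losses are introduced; the small epsilon added for numerical stability is an assumption.

```python
import numpy as np

def scale_norm(x, g, eps=1e-6):
    """ScaleNorm (Eq. 2): rescale each representation onto a sphere of radius g."""
    return g * x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def pre_norm_block(o, sublayer, g):
    """Pre-normalization residual: o_{l+1} = o_l + F_l(ScaleNorm(o_l))."""
    return o + sublayer(scale_norm(o, g))
```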
Summarization Loss Given an English textsummary pair x and x′, we use the encoder φEX and the decoder φDX to generate the hypothetical English summary ˜x that maximizes the output summary probability given the source text: ˜x = arg max¯x P(¯x | x). We adopt maximum loglikelihood training with cross-entropy loss between hypothetical summary ˜x and gold summary x′: zx = φEX (x), ˜x = φDX (zx) LsummX (x, x′)=− T X t=1 log P x′ t | ˜x<t, zx  (3) where T is the length of x′. The Chinese summarization loss LsummY is similarly defined for the Chinese encoder φEY and decoder φDY. Generative and Discriminative Loss Given an English source text x and a Chinese source text y, we use the encoder φEX and φEY to obtain the contextual representations zx = {zx1, . . . , zxm} and zy = {zy1, . . . , zyn}, respectively. For Zhto-En summarization, we use the mapper MY to map zy into the English context space: zy→x = MY(zy). We hope the mapped distribution zy→x and the real English distribution zx could be as similar as possible such that the English decoder can deal with cross-lingual summarization just like dealing with monolingual summarization. To learn this mapping, we introduce two discriminators and adopt the adversarial training (Goodfellow et al., 2014) technique. We optimize the 6223 mappers at the sentence-level1 rather than wordlevel, which is inspired by Aldarmaki and Diab (2019) where they found learning the aggregate mapping can yield a more optimal solution compared to word-level mapping. Concretely, we first average the contextual representations: ˜zy→x = 1 n n X i=1 (zy→x)i , ˜zx = 1 m m X i=1 zxi (4) Then we train the discriminator DX to discriminate between ˜zy→x and ˜zx using the following discriminative loss: LdisX (˜zy→x, ˜zx) = −log PDX (src = 0|˜zy→x) −log PDX (src = 1|˜zx) (5) where PDX (src |˜z) is the predicted probability of DX to distinguish whether ˜z is coming from the real English representation (src = 1) or from the mapper MY (src = 0). In our framework, the encoder φEX and mapper MY together make up the generator. The generator tries to generate representations which would confuse the discriminator, so its objective is to maximize the discriminative loss in Eq. 5. Alternatively, we train the generator to minimize the following generative loss: LgenY (˜zy→x, ˜zx) = −log PDX (src = 1|˜zy→x) −log PDX (src = 0|˜zx) (6) The discriminative loss LdisY (˜zx→y, ˜zy) for DY, generative loss LgenX (˜zx→y, ˜zy) for φEY and MX are similarly defined. Notice that since we use vector averaging and adopt the linear transformation, it does not matter whether we apply the linear mapping before or after averaging the contextual representations, and the learned sentence-level mappers can be directly applied to word-level mappings. Cycle Reconstruction Loss Theoretically, if we do not add additional constraints, there exist infinite mappings that can align the distribution of ˜zx and ˜zy, and thus the learned mappers may be invalid. In order to learn better mappings, we introduce the cycle reconstruction loss and back-translation loss to enhance them. 1The “sentence” in this paper can refer to the sequence containing multiple sentences. Given zx, we first use MX to map it to the Chinese space, and then use MY to map it back: zx→y = MX (zx), ˆzx = MY(zx→y) (7) We force zx and ˆzx to be consistent, constrained by the following cycle reconstruction loss: LcycX (zx, ˆzx) = ∥zx −ˆzx∥ (8) The cycle reconstruction loss LcycY for zy and ˆzy is similarly defined. 
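The adversarial and cycle terms above can be sketched as follows (a PyTorch-style illustration, not the released code). The sentence-level pooling follows Eq. 4; expressing Eqs. 5–6 as binary cross-entropy over a single logit, the detach calls, and the choice of L2 norm in Eq. 8 are implementation assumptions.

```python
import torch
import torch.nn.functional as F

def sent_repr(z):
    """Eq. 4: average contextual representations into a sentence-level vector."""
    return z.mean(dim=1)                                  # [batch, seq, d] -> [batch, d]

def _bce(logits, target_value):
    return F.binary_cross_entropy_with_logits(logits, torch.full_like(logits, target_value))

def discriminator_loss(D_X, z_y2x, z_x):
    """Eq. 5: D_X labels mapped Chinese representations as 0 and real English ones as 1."""
    fake = sent_repr(z_y2x).detach()                      # stop gradients into encoder/mapper
    real = sent_repr(z_x).detach()
    return _bce(D_X(fake), 0.0) + _bce(D_X(real), 1.0)

def generator_loss(D_X, z_y2x, z_x):
    """Eq. 6: the encoder and mapper try to make D_X assign the opposite labels."""
    return _bce(D_X(sent_repr(z_y2x)), 1.0) + _bce(D_X(sent_repr(z_x)), 0.0)

def cycle_loss(M_X, M_Y, z_x):
    """Eqs. 7-8: map English representations to the Chinese space and back, then
    penalise the distance between the original and reconstructed representations."""
    z_hat = M_Y(M_X(z_x))
    return (z_x - z_hat).norm(dim=-1).mean()              # norm choice is an assumption
```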
Back-Translation Loss The cycle-reconstructed representation ˆzx in Eq. 8 can be regarded as augmented data to train the decoder, which is similar to the back-translation in the Neural Machine Translation area. Concretely, we use the decoder φDX to read ˆzx and generate the hypothetical summary ˆx. The back-translation loss is defined as the cross-entropy loss between ˆx and gold summary x′: ˆx = φDX (ˆzx) LbackX (ˆzx) = − T X t=1 log P x′ t | ˆx<t, ˆzx  (9) The back-translation loss enhances not only the generation ability of the decoder but also the effectiveness of the mapper. The back-translation loss LbackY for ˆzy is similarly defined. Total Loss The total loss for optimizing the encoder, decoder, and mapper of the English side is weighted sum of the above losses: LX = LsummX + λ1LgenX + λ2LcycX + λ3LbackX (10) where λ1, λ2, and λ3 is the weighted hyperparameters. The total loss of the Chinese side is similarly defined, and the complete loss of our model is the sum of English loss and Chinese loss: L = LX + LY (11) The total loss for optimizing the discriminators is: Ldis = LdisX + LdisY (12) 5 Supervised Training Objective The supervised training objective contains the same summarization loss in unsupervised training objective (Eq. 3). In addition, it has X-summarization loss and reconstruction loss. 6224 Algorithm 1 Cross-lingual summarization Input: English summarization data X and Chinese summarization data Y. 1: Pre-train English and Chinese monolingual summarization several epochs on X and Y. 2: for i = 0 to max iters do 3: Sample a batch from X and a batch from Y 4: if unsupervised then 5: for k = 0 to dis iters do 6: Update DX and DY on Ldis in Eq.5. 7: (a) Update φEX , φEY, φDX , and φDY 8: on Lsumm in Eq. 3. 9: (b) Update φEX , φEY, MX , and MY 10: on Lgen in Eq. 6. 11: (c) Update φEX , φEY, MX , and MY 12: on Lcyc in Eq. 8. 13: (d) Update MX , MY, φDX , and φDY 14: on Lback in Eq. 9. 15: else if supervised then 16: (a) Upate φEX , φEY, φDX , and φDY 17: on Lsumm in Eq. 3. 18: (b) Update φEX , φEY, φDX , and φDY 19: on Lxsumm in Eq. 13. 20: (c) Update φEX , φEY, MX , and MY 21: on Lrec in Eq. 14. X-Summarization Loss Given a parallel English source text x and Chinese summary y′. We use φEX , MX , and φDY to generate the hypothetical Chinese summary ˜y, then train them with crossentropy loss: zx=φEX(x), zx→y=MX(zx), ˜y=φDY(zx→y) LxsummX (x, y′) = − T X t=1 log P y′ t | ˜y<t, x  (13) The X-summarization loss for a Chinese text y and English summary x′ is similarly defined. Reconstruction Loss Since the cross-lingual summarization corpora are constructed by translating the texts to the other language, the English texts and the Chinese texts are parallel to each other. We can build a reconstruction loss to align the sentence representation for the parallel English and Chinese texts. Specifically, supposing x and y are parallel source English and Chinese texts, we first use φEX and φEY to obtain contextual representations zx and zy, respectively. Then we average the contextual representations to get their sentence representations and use the mappers to map them into the other language. Since the English and Chinese texts are translations to each other, the semantics of their sentence representations should be the same. Thus we design the following reconstruction loss: ˜zx = 1 m m X i=1 zxi, ˜zy→x = 1 n n X i=1 (zy→x)i LrecX (˜zx, ˜zy→x) = ∥˜zx −˜zy→x∥ (14) and LrecY is similarly defined. 
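Putting the loss terms together, one training iteration of Alg. 1 can be sketched as below. This is illustrative PyTorch-style pseudocode: the loss helpers on `model` are hypothetical names, and Alg. 1's separate updates (a)–(d) are folded into a single weighted objective, using the loss weights later reported in Section 6.5.

```python
def train_iteration(batch_en, batch_zh, model, opt_main, opt_dis,
                    supervised=False, dis_iters=5):
    """One iteration of Alg. 1 (illustrative; steps (a)-(d) are folded into one update)."""
    if not supervised:
        # Update the discriminators on L_dis (Eq. 12) for dis_iters inner steps.
        for _ in range(dis_iters):
            opt_dis.zero_grad()
            model.discriminator_loss(batch_en, batch_zh).backward()
            opt_dis.step()
        # Weighted unsupervised objective of Eqs. 10-11 (lambda_1=1, lambda_2=5, lambda_3=2).
        loss = (model.summ_loss(batch_en, batch_zh)            # Eq. 3, both languages
                + 1.0 * model.gen_loss(batch_en, batch_zh)     # Eq. 6
                + 5.0 * model.cyc_loss(batch_en, batch_zh)     # Eq. 8
                + 2.0 * model.back_loss(batch_en, batch_zh))   # Eq. 9
    else:
        # Supervised objective of Eq. 15 (lambda_1=0.5, lambda_2=5).
        loss = (model.xsumm_loss(batch_en, batch_zh)           # Eq. 13
                + 0.5 * model.summ_loss(batch_en, batch_zh)    # Eq. 3
                + 5.0 * model.rec_loss(batch_en, batch_zh))    # Eq. 14
    opt_main.zero_grad()
    loss.backward()
    opt_main.step()
    return loss.item()
```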
Notice that the generative and discriminative loss, cycle-construction loss, and back-translation loss are unnecessary here because we can directly use aligned source text with objective 14 to align the context representations. Total Loss The total loss for training the English side is: LX = LxsummX + λ1LsummX + λ2LrecX (15) where λ1 and λ2 is the weighted hyper-parameters. The total loss of the Chinese side is similarly defined. 6 Experiments 6.1 Experiment Settings We conduct experiments on English-to-Chinese (En-to-Zh) and Chinese-to-English (Zh-to-En) summarizations. Following Duan et al. (2019), we translate the source texts to the other language to form the (pseudo) parallel corpus. Since they do not release their training data, we translate the source text ourselves through the Google translation service. Notice that Zhu et al. (2019) translate the summaries rather than source texts. Since Duan et al. (2019) use Gigaword and DUC2004 datasets for experiments while Zhu et al. (2019) use LCSTS and CNN/DM for experiments, we conduct experiments on all the 4 datasets. When comparing with Duan et al. (2019) and Zhu et al. (2019), we use the same number of translated parallel data for training. Due to limited computing resources, we only do unsupervised experiments on gigaword and LCSTS datasets. Notice that the test sets provided by Zhu et al. (2019) are unprocessed, therefore we have to process the test samples they provided ourselves. 6225 6.2 Dataset Gigaword English Gigaword corpus (Napoles et al., 2012) contains 3.80M training pairs, 2K validation pairs, and 1,951 test pairs. We use the human-translated Chinese source sentences provided by (Duan et al., 2019) to do Zh-to-En tests. DUC2004 DUC2004 corpus only contains test sets. We use the model trained on gigaword corpus to generate summaries on DUC2004 test sets. We use the 500 human-translated test samples provided by (Duan et al., 2019) to do Zh-to-En tests. LCSTS LCSTS (Hu et al., 2015) is a Chinese summarization corpus, which contains 2.40M training pairs, 10,666 validation pairs, and 725 test pairs. We use 3K cross-lingual test samples provided by Zhu et al. (2019) to do Zh-to-En tests. CNN/DM CNN/DM (Hermann et al., 2015) contains 287.2K training pairs, 13.3K validation pairs, and 11.5K test pairs. We use the 3K cross-lingual test samples provided by Zhu et al. (2019) to do En-to-Zh cross-lingual tests. 6.3 Evaluation Metrics We use ROUGE-1 (unigram), ROUGE-2 (bigram), and ROUGE-L (LCS) F1 scores as the evaluation metrics, which are most commonly used evaluation metrics in the summarization task. 6.4 Competitive Models For unsupervised cross-lingual summarization, we set the following baselines: • Unified It jointly trains English and Chinese monolingual summarizations in a unified model and uses the first token of the decoder to control whether it generates Chinese or English summaries. • Unified+CLWE It builds a unified model and adopts pre-trained unsupervised cross-lingual word embeddings. The cross-lingual word embeddings are obtained via projecting embeddings from source language to target language. We use Vecmap2 to learn the cross-lingual word embeddings. For supervised cross-lingual summarization, we compare our model with (Shen et al., 2018), (Duan et al., 2019), and Zhu et al. (2019). 
We also consider the following baselines for comparison: 2https://github.com/artetxem/vecmap • Pipe-TS The Pipe-TS baseline first uses a Transformer-based translation model to translate the source text to the other language, then uses a monolingual summarization model to generate summaries. To make this baseline stronger, we replace the translation model with the Google translation system and name it as Pipe-TS*. • Pipe-ST The Pipe-ST baseline first uses a monolingual summarization model to generate the summaries, then uses a translation model to translate the summaries to the other language. We replace the translation model with the Google translation system as PipeST*. • Pseudo The Pseudo baseline directly trains a cross-lingual summarization model by using the pseudo parallel cross-lingual summarization data. • XLM Pretraining This method is proposed by Lample and Conneau (2019), where they pretrain the encoder and decoder on largescale multilingual text using causal language modeling (CLM), masked language modeling (MLM), and translation language modeling (TLM) tasks. 3 6.5 Implementation Details For transformer architectures, we use the same configuration as Vaswani et al. (2017), where the number of layers, model hidden size, feed-forward hidden size, and the number of heads are 6, 512, 1024, and 8, respectively. We set g = √dmodel = √ 512 in ScaleNorm. The mapper is a linear layer with a hidden size of 512, and the discriminator is a two-layer linear layer with a hidden size of 2048. We use the NLTK4 tool to process English texts and use jieba5 tool to process Chinese texts. The vocabulary size of English words and Chinese words are 50,000 and 80,000 respectively. We set λ1 = 1, λ2 = 5, λ3 = 2 in unsupervised training and λ1 = 0.5, λ2 = 5 in supervised training according to the performance of the validation set. We set dis iters = 5 in Alg. 1. 3This baseline was suggested by the reviewers, and the results are only for reference since it additionally uses a lot of pre-training text. 4https://github.com/nltk/nltk 5https://github.com/fxsjy/jieba 6226 Method Zh-to-En En-to-Zh Gigaword DUC2004 LCSTS CNN/DM R1 R2 RL R1 R2 RL R1 R2 RL R1 R2 RL Pipe-TS 22.27 6.58 20.53 21.29 5.96 17.99 27.26 10.41 21.72 Pipe-ST 28.27 11.90 26.50 25.73 8.19 21.60 36.48 18.87 31.44 25.95 11.01 23.29 Pipe-TS* 22.52 6.67 20.76 21.83 6.11 18.42 29.29 11.09 23.18 Pipe-ST* 29.56 12.50 26.42 26.66 8.51 22.37 38.26 19.56 32.93 27.82 11.78 24.97 Pseudo* 30.93 13.25 27.29 27.03 8.49 23.08 38.61 19.76 34.63 35.81 14.96 32.07 (Shen et al., 2018) 21.5 6.6 19.6 19.3 4.3 17.0 (Duan et al., 2019) 30.1 12.2 27.7 26.0 8.0 23.1 (Zhu et al., 2019) 40.34 22.65 36.39 38.25 20.20 34.76 (Zhu et al., 2019) w/ LDC 40.25 22.58 36.21 40.23 22.32 36.59 XLM Pretraining 32.28 14.03 28.19 28.27 9.40 23.78 42.75 22.80 38.73 39.11 17.57 34.14 Ours 32.04 13.60 27.91 27.25 8.71 23.36 40.97 23.20 36.96 38.12 16.76 33.86 Table 1: Rouge F1 scores (%) on cross-lingual summarization tests. “XLM Pretraining” and “Zhu et al. (2019) w/ LDC” use additional training data. Our model significantly (p < 0.01) outperforms all pipeline methods and pseudo-based methods. We use Adam optimizer (Kingma and Ba, 2014) with β = (0.9, 0.98) for optimization. We set the learning rate to 3e −4 and adopt the warm-up learning rate (Goyal et al., 2017) for the first 2,000 steps, the initial warm-up learning is set to 1e −7. We adopt the dropout technique and set the dropout rate to 0.2. 
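To make the mapper and discriminator sizes given above concrete (a 512-unit linear mapper and a two-layer discriminator with hidden size 2048), a PyTorch-style sketch follows; the activation function, the bias-free mapping, and the single-logit output are assumptions not specified in the paper.

```python
import torch.nn as nn

D_MODEL = 512        # model hidden size (Section 6.5)
D_DIS_HIDDEN = 2048  # discriminator hidden size (Section 6.5)

def build_mapper():
    """Linear mapper M: projects contextual representations between the two language spaces."""
    return nn.Linear(D_MODEL, D_MODEL, bias=False)     # bias-free mapping is an assumption

def build_discriminator():
    """Two-layer discriminator D: scores whether a pooled representation is real or mapped."""
    return nn.Sequential(
        nn.Linear(D_MODEL, D_DIS_HIDDEN),
        nn.LeakyReLU(0.2),                             # activation is an assumption
        nn.Linear(D_DIS_HIDDEN, 1),                    # single logit: real (1) vs. mapped (0)
    )
```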
7 Results and Analysis 7.1 Unsupervised Cross-Lingual Summarization The experiment results of unsupervised crosslingual summarization are shown in Table 2, and it can be seen that our model significantly outperforms all baselines by a large margin. By training a unified model of all languages, the model’s crosslingual transferability is still poor, especially for the gigaword dataset. Incorporating cross-lingual word embeddings into the unified model can improve the performance, but the improvement is limited. We think this is due to that the cross-lingual word embeddings learned by Vecmap cannot leverage the contextual information. Due to space limitations, we present case studies in the Appendix. After checking the generated summaries of the two baseline models, we find that they can generate readable texts, but the generated texts are far away from the theme of the source text. This indicates that the encoder and decoder of these baselines have a large gap, such that the decoder cannot understand the output of the encoder. We also find that summaries generated by our model are obviously more relevant, demonstrating that aligned representations between languages are helpful. But we can also see that there is still a gap beMethod LCSTS Gigaword R1 R2 RL R1 R2 RL Unified 13.52 1.35 10.02 5.25 0.87 2.09 Unified+CLWE 14.02 1.49 12.10 6.51 1.07 2.92 Ours 20.11 5.46 16.07 13.75 4.29 11.82 Table 2: Rouge F1 scores (%) on unsupervised crosslingual summarization tests. Our model outperforms all baselines significantly (p < 0.01). tween our unsupervised results (Table 2) and supervised results (Table 1), indicating that our model has room for improvement. 7.2 Supervised Cross-Lingual Summarization The experiment results of supervised cross-lingual summarization are shown in Table 1. Due to the lack of corpus for training Chinese long document summarization model, we do not experiment with the Pipe-TS model on the CNN/DM dataset. By comparing our results with pipeline-based or pseudo baselines, we can find that our model outperforms all these baselines in all cases. Our model achieves an improvement of 0∼3 Rouge scores over the Pseudo model trained directly with translated parallel cross-lingual corpus, and 1.5∼4 Rouge-1 scores over those pipeline models. We also observe that models using the Google translation system all perform better than models using the Transformer-based translation system. This may because the Transformer-based translation system will bring some “UNK” tokens, and the transformer-based translation system trained by ourselves does not perform as well as the Google translation system. In addition, Pipe-ST models perform better than Pipe-TS models, which is con6227 Method Info. ↑ Con. ↑ Flu. ↑ Reference 3.60 3.50 3.80 PipeST* 3.56 3.51 4.00 PipeTS* 3.37 3.80 3.81 Pseudo 3.27 3.81 3.89 Ours (supervised) 3.56 3.93 3.94 Ours (unsupervised) 2.18 3.34 2.87 Table 3: Results of the human evaluation on the gigaword dataset. Method Info. ↑ Con. ↑ Flu. ↑ Reference 3.58 3.57 4.21 PipeST* 3.38 3.45 4.13 PipeTS* 3.38 3.93 3.78 Pseudo 3.46 3.90 4.05 Ours (supervised) 3.55 4.03 4.13 Table 4: Results of the human evaluation on the CNN/DM dataset. sistent with the conclusions of previous work. 
This is because (1) the translation process may discard some informative clauses, (2) the domain of the translation corpus is different from the domain of summarization corpus, which will bring the domain discrepancy problem to the translation process, and (3) the translated texts are often “translationese” (Graham et al., 2019). The Pseudo model performs better than Pipe-TS models but performs similarly as Pipe-ST models. By comparing our results with others, we can find that our model outperforms Shen et al. (2018) and Duan et al. (2019) on both gigaword and DUC2004 test sets, and it outperforms Zhu et al. (2019) on the LCSTS dataset. But our Rouge scores are lower than Zhu et al. (2019) on the CNN/DM dataset, especially the Rouge-2 score. However, our model performs worse than pretrained models. 7.3 Human Evaluation The human evaluation was also performed. Since we cannot get the summaries generated by other models, we only compare with our baselines in the human evaluation. We randomly sample 50 examples from the gigaword (Zh-to-En) test set and 20 examples from the CNN/DM (En-to-Zh) test set. We ask five volunteers to evaluate the quality of the generated summaries from the following three aspects: (1) Informative: how much does the generated summaries cover the key content of the source text? (2) Conciseness: how concise are the generated summaries? (3) Fluency: how fluent are the generated summaries? The scores are Method Gigaword CNN/DM R1 R2 RL R1 R2 RL Ours (supervised) 32.04 13.60 27.91 38.12 16.76 33.86 w/o summ. loss 30.36*12.84*26.41*36.37*15.97*32.11* w/o mappers 31.95 13.46 27.88 38.28 16.73 33.93 w/o ScaleNorm 31.27* 13.29 27.22*37.01*16.30*32.87* w/o pre. steps 31.33* 13.30 27.35*37.23* 16.39 33.01* Unshare enc/dec30.10*12.71*26.28*35.93*15.86*31.82* Table 5: Results of ablation tests in supervised setting. Statistically significant improvement (p < 0.01) over the complete model are marked with *. between 1-5, with 5 being the best. We average the scores and show the results in Table 3 and Table 4. Our model exceeds all baselines in informative and conciseness scores, but get a slightly lower fluency score than Pipe-ST*. We think this is because the Google translation system has the ability to identify grammatical errors and generate fluent sentences. 7.4 Ablation Tests To study the importance of different components of our model, we also test some variants of our model. For supervised training, we set variants of (1) without (monolingual) summarization loss, (2) without mappers6, (3) replace ScaleNorm with LayerNorm, (4) without pre-trained monolingual steps, and (5) unshare the encoder and decoder. For unsupervised training, we additionally set variants without cyc-reconstruction loss or back-translation loss. The results of ablation tests of supervised and unsupervised cross-lingual summarization are shown in Table 5 and Table 6, respectively. It seems that the role of mappers does not seem obvious in the case of supervised training. We speculate that this may be due to the joint training of monolingual and cross-lingual summarizations, and directly constraining the context representations before mapping can also yield shared (aligned) representations. But mappers are crucial for unsupervised cross-lingual summarization. For supervised cross-lingual summarization, except for mappers, all components contribute to the improvement of the performance. The performance decreases after removing any of the components. 
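The ablation tables mark differences that are statistically significant at p < 0.01. The paper does not state which significance test was used; one common option for per-example ROUGE scores is a paired bootstrap test, sketched here purely for illustration.

```python
import numpy as np

def paired_bootstrap_p(scores_a, scores_b, n_samples=10000, seed=0):
    """One-sided bootstrap p-value for 'system A is not better than system B',
    given per-example metric scores (e.g. ROUGE) on the same test set."""
    rng = np.random.default_rng(seed)
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    n = len(scores_a)
    wins = 0
    for _ in range(n_samples):
        idx = rng.integers(0, n, size=n)      # resample test examples with replacement
        wins += scores_a[idx].mean() > scores_b[idx].mean()
    return 1.0 - wins / n_samples
```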
For unsupervised cross-lingual summarization, all components contribute to the improvement of the performance and the mappers and shared encoder/decoder are key components. 6In this case, we directly constrain the parallel zx and zy to be the same. 6228 Method LCSTS Gigaword R1 R2 RL R1 R2 RL Ours (unsupervised) 20.10 5.46 16.07 13.75 4.29 11.82 w/o mappers 14.79*2.29*12.36* 6.26* 1.02*3.11* w/o cyc. loss 17.51*4.70*13.95* 7.21* 1.31*4.04* w/o back. loss 19.37 5.23 15.44 13.20 4.11 11.27 w/o ScaleNorm 19.24* 5.21 15.37*13.15* 4.08 11.21 w/o pre. steps 19.70 5.24 15.72 13.13 4.10 10.91 Unshare enc/dec 12.28*0.97*10.37* 4.88* 0.82*1.91* Table 6: Results of the ablation tests of unsupervised cross-lingual summarization. Statistically significant improvement (p < 0.01) over the complete model are marked with *. 8 Related Work 8.1 Cross-Lingual Summarization Early researches on cross-lingual abstractive summarization are mainly based on the monolingual summarization methods and adopt different strategies to incorporate bilingual information into the pipeline model (Leuski et al., 2003; Orasan and Chiorean, 2008; Wan et al., 2010; Wan, 2011; Yao et al., 2015). Recently, some neural cross-lingual summarization systems have been proposed for cross-lingual summarization (Shen et al., 2018; Duan et al., 2019; Zhu et al., 2019). The first neural-based crosslingual summarization system was proposed by Shen et al. (2018), where they first translate the source texts from the source language to the target language to form the pseudo training samples. A teacher-student framework is adopted to achieve end-to-end cross-lingual summarization. Duan et al. (2019) adopt a similar framework to train the cross-lingual summarization model, but they translate the summaries rather than source texts to strengthen the teacher network. Zhu et al. (2019) propose a multi-task learning framework by jointly training cross-lingual summarization and monolingual summarization (or machine translation). They also released an English-Chinese cross-lingual summarization corpus with the aid of online translation services. 8.2 Learning Cross-Lingual Representations Learning cross-lingual representations is a beneficial method for cross-lingual transfer. Conneau et al. (2017) use adversarial networks to learn mappings between languages without supervision. They show that their method works very well for word translation, even for some distant language pairs like English-Chinese. Lample et al. (2018) learn word mappings between languages to build an initial unsupervised machine translation model, and then perform iterative backtranslation to fine-tune the model. Aldarmaki and Diab (2019) propose to directly map the averaged embeddings of aligned sentences in a parallel corpus, and achieve better performances than wordlevel mapping in some cases. 9 Conclusions In this paper, we propose a framework that jointly learns to align and summarize for neural crosslingual summarization. We design training objectives for supervised and unsupervised cross-lingual summarizations, respectively. We also propose methods to enhance the isomorphism and crosslingual transfer between languages. Experimental results show that our model outperforms supervised baselines in most cases and outperforms unsupervised baselines in all cases. 
Acknowledgments This work was supported by National Natural Science Foundation of China (61772036), Tencent AI Lab Rhino-Bird Focused Research Program (No.JR201953), and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author. References Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2440–2452. Hanan Aldarmaki and Mona Diab. 2019. Contextaware cross-lingual mapping. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3906–3911. Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In International Conference on Learning Representations. 6229 Xilun Chen, Ahmed Hassan, Hany Hassan, Wei Wang, and Claire Cardie. 2019. Multi-source cross-lingual model transfer: Learning what to share. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3098–3112. Alexis Conneau, Guillaume Lample, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. Xiangyu Duan, Mingming Yin, Min Zhang, Boxing Chen, and Weihua Luo. 2019. Zero-shot crosslingual abstractive sentence summarization through teaching generation and attention. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3162–3172. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. Priya Goyal, Piotr Doll´ar, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. 2017. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677. Yvette Graham, Barry Haddow, and Philipp Koehn. 2019. Translationese in machine translation evaluation. arXiv preprint arXiv:1906.09833. Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in neural information processing systems, pages 1693–1701. Baotian Hu, Qingcai Chen, and Fangze Zhu. 2015. Lcsts: A large scale chinese short text summarization dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1967–1972. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING 2012, pages 1459–1474. Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS). Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, et al. 2018. 
Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039–5049. Anton Leuski, Chin-Yew Lin, Liang Zhou, Ulrich Germann, Franz Josef Och, and Eduard Hovy. 2003. Cross-lingual c* st* rd: English access to hindi information. ACM Transactions on Asian Language Information Processing (TALIP), 2(3):245–269. Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, pages 95–100. Association for Computational Linguistics. Toan Q Nguyen and Julian Salazar. 2019. Transformers without tears: Improving the normalization of self-attention. arXiv preprint arXiv:1910.05895. Constantin Orasan and Oana Andreea Chiorean. 2008. Evaluation of a cross-lingual romanian-english multi-document summariser. In LREC 2008. Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, and Eneko Agirre. 2019. Analyzing the limitations of cross-lingual word embedding mappings. arXiv preprint arXiv:1906.05407. Barun Patra, Joel Ruben Antony Moniz, Sarthak Garg, Matthew R Gormley, and Graham Neubig. 2019. Bilingual lexicon induction with semi-supervision in non-isometric embedding spaces. arXiv preprint arXiv:1908.06625. Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1599–1613. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083. Shi-qi Shen, Yun Chen, Cheng Yang, Zhi-yuan Liu, and Mao-song Sun. 2018. Zero-shot cross-lingual neural headline generation. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 26(12):2319–2327. Anders Søgaard, Sebastian Ruder, and Ivan Vuli´c. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778– 788. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. 6230 Xiaojun Wan. 2011. Using bilingual information for cross-language document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1546–1555. Xiaojun Wan, Huiying Li, and Jianguo Xiao. 2010. Cross-language document summarization based on machine translation quality prediction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 917–926. Svante Wold, Kim Esbensen, and Paul Geladi. 1987. Principal component analysis. Chemometrics and intelligent laboratory systems, 2(1-3):37–52. Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. 
In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011. Jin-ge Yao, Xiaojun Wan, and Jianguo Xiao. 2015. Phrase-based compressive cross-language summarization. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 118–127. Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, and Chengqing Zong. 2019. Ncls: Neural cross-lingual summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3045–3055. A Visualization We use the PCA (Wold et al., 1987) algorithm to visualize the pre- and post-aligned context representations of our model in Figure 2. The left picture shows the original distribution of two languages, and the right picture shows the distribution after we map Chinese representations to English. Figure 2 reveals that the representations of the two languages are originally separated but become aligned after our proposed procedure, which demonstrates that our proposed alignment procedure is effective. B Case Studies We show four cases of Chinese-to-English summarization in Table 7. Since most of the summaries generated by other unsupervised baselines are meaningless (e.g., far away from the theme of the source text, all tokens are “UNK” and so on), we don’t show their results here. 2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.8 1.0 (a) Pre-alignment 2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.8 1.0 (b) Post-alignment Figure 2: Visualization of the pre- and post-aligned context representations. The blue dots are English context representations and the red dots are Chinese context representations. 6231 Text: 野生动物专家称,除非政府发起全面打击猖獗偷猎的战争,否则印度大象将会灭绝. (wildlife experts say indian elephants will go extinct unless government launches full-scale war against sting poaching) Reference: india elephant may be facing extinction : experts by <unk> Pipe-ST: wildfile expert says indian elephant will die out Pipe-TS: india to kill elephants in war on poaching Pseudo: indian elephants face extinction unless government launches war against poaching Ours (supervised): india elephants face extinction over poaching Ours (unsupervised): india elephants rise to extinct Text: 一份媒体报道,一名日本男子周日在台湾上吊自杀,原因是亚洲冠军没能在世界杯上 获得一场胜利. (report claimed that a japanese man hanged himself in taiwan on sunday because the asian champion failed to win a victory at the word cup) Reference: fan hangs himself for nation ’s dismal world cup performance Pipe-ST: japanese man hangs himself in taiwan as asian champion fails to win Pipe-TS: world cup winner commits suicide Pseudo: man commits suicide because of world cup failure Ours (supervised): man hangs himself after world cup failure Ours (unsupervised): failed to secure a single champions Text: 澳大利亚教练罗比-迪恩斯对上周末在这里对阵意大利的袋鼠测试前被新西兰击败的 球队做了八次改变. 
(australian coach robbie deans made eight changes to a team defeated by new zealand before the kangaroo test against italy here last weekend) Reference: <unk> : deans rings changes for aussies azzurri test Pipe-ST: australian coach changes team eight times before kangaroo test Pipe-TS: australia make eight changes for italy test Pseudo: deans makes eight changes for new zealand Ours (supervised): australia make eight changes ahead of italy test Ours (unsupervised): weekend ahead of wallabies test against Italy here Text: 凯尔特人中场保罗哈特利在经历了一个星期痛苦的欧洲之旅后,于周五为苏格兰足球 发起了一场激情的辩护. (celtic midfielder paul hartley launched a passionate defence for scottish football on friday after a week of painful european travel) Reference: football : scottish football is not a joke says celtic star Pipe-ST: paul hartley launches passionate defense Pipe-TS: celtic ’s hartley launches passionate defense Pseudo: celtic ’s hartley launches passionate defense for scotland Ours (supervised): celtic ’s hartley defends scottish football Ours (unsupervised): celtic midfielder paul week of european misery Table 7: Case studies of Chinese-to-English summarization.
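The PCA visualization described in Appendix A (Figure 2) can be reproduced roughly as follows with scikit-learn and matplotlib. This is a generic sketch rather than the authors' plotting script; `z_en` and `z_zh` stand for pooled sentence representations collected from the encoders, and `mapper_zh2en` is a placeholder for the trained mapper.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_alignment(z_en, z_zh, title):
    """Project pooled English/Chinese sentence representations to 2D and scatter them."""
    points = PCA(n_components=2).fit_transform(np.concatenate([z_en, z_zh], axis=0))
    n = len(z_en)
    plt.figure()
    plt.scatter(points[:n, 0], points[:n, 1], s=8, c="tab:blue", label="English")
    plt.scatter(points[n:, 0], points[n:, 1], s=8, c="tab:red", label="Chinese")
    plt.title(title)
    plt.legend()
    plt.show()

# e.g. plot_alignment(z_en, z_zh, "Pre-alignment")
#      plot_alignment(z_en, mapper_zh2en(z_zh), "Post-alignment")
```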
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6232–6243 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6232 Leveraging Graph to Improve Abstractive Multi-Document Summarization Wei Li1, Xinyan Xiao1∗, Jiachen Liu1, Hua Wu1, Haifeng Wang1, Junping Du2 1Baidu Inc., Beijing, China 2Beijing University of Posts and Telecommunications {liwei85,xiaoxinyan,liujiachen,wu hua,wanghaifeng}@baidu.com [email protected] Abstract Graphs that capture relations between textual units have great benefits for detecting salient information from multiple documents and generating overall coherent summaries. In this paper, we develop a neural abstractive multidocument summarization (MDS) model which can leverage well-known graph representations of documents such as similarity graph and discourse graph, to more effectively process multiple input documents and produce abstractive summaries. Our model utilizes graphs to encode documents in order to capture cross-document relations, which is crucial to summarizing long documents. Our model can also take advantage of graphs to guide the summary generation process, which is beneficial for generating coherent and concise summaries. Furthermore, pre-trained language models can be easily combined with our model, which further improve the summarization performance significantly. Empirical results on the WikiSum and MultiNews dataset show that the proposed architecture brings substantial improvements over several strong baselines. 1 Introduction Multi-document summarization (MDS) brings great challenges to the widely used sequence-tosequence (Seq2Seq) neural architecture as it requires effective representation of multiple input documents and content organization of long summaries. For MDS, different documents may contain the same content, include additional information, and present complementary or contradictory information (Radev, 2000). So different from single document summarization (SDS), cross-document links are very important in extracting salient information, detecting redundancy and generating overall coherent summaries for MDS. Graphs that capture ∗Corresponding author. relations between textual units have great benefits to MDS, which can help generate more informative, concise and coherent summaries from multiple documents. Moreover, graphs can be easily constructed by representing text spans (e.g. sentences, paragraphs etc.) as graph nodes and the semantic links between them as edges. Graph representations of documents such as similarity graph based on lexical similarities (Erkan and Radev, 2004) and discourse graph based on discourse relations (Christensen et al., 2013), have been widely used in traditional graph-based extractive MDS models. However, they are not well studied by most abstractive approaches, especially the end-to-end neural approaches. Few work has studied the effectiveness of explicit graph representations on neural abstractive MDS. In this paper, we develop a neural abstractive MDS model which can leverage explicit graph representations of documents to more effectively process multiple input documents and distill abstractive summaries. Our model augments the end-toend neural architecture with the ability to incorporate well-established graphs into both the document representation and summary generation processes. 
Specifically, a graph-informed attention mechanism is developed to incorporate graphs into the document encoding process, which enables our model to capture richer cross-document relations. Furthermore, graphs are utilized to guide the summary generation process via a hierarchical graph attention mechanism, which takes advantage of the explicit graph structure to help organize the summary content. Benefiting from the graph modeling, our model can extract salient information from long documents and generate coherent summaries more effectively. We experiment with three types of graph representations, including similarity graph, topic graph and discourse graph, which all significantly improve the MDS performance. 6233 Additionally, our model is complementary to most pre-trained language models (LMs), like BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019b). They can be easily combined with our model to process much longer inputs. The combined model adopts the advantages of both our graph model and pre-trained LMs. Our experimental results show that our graph model significantly improves the performance of pre-trained LMs on MDS. The contributions of our paper are as follows: • Our work demonstrates the effectiveness of graph modeling in neural abstractive MDS. We show that explicit graph representations are beneficial for both document representation and summary generation. • We propose an effective method to incorporate explicit graph representations into the neural architecture, and an effective method to combine pre-trained LMs with our graph model to process long inputs more effectively. • Our model brings substantial improvements over several strong baselines on both WikiSum and MultiNews dataset. We also report extensive analysis results, demonstrating that graph modeling enables our model process longer inputs with better performance, and graphs with richer relations are more beneficial for MDS.1 2 Related Work 2.1 Graph-based MDS Most previous MDS approaches are extractive, which extract salient textual units from documents based on graph-based representations of sentences. Various ranking methods have been developed to rank textual units based on graphs to select most salient ones for inclusion in the final summary. Erkan and Radev (2004) propose LexRank to compute sentence importance based on a lexical similarity graph of sentences. Mihalcea and Tarau (2004) propose a graph-based ranking model to extract salient sentences from documents. Wan (2008) further proposes to incorporate documentlevel information and sentence-to-document relations into the graph-based ranking process. A series of variants of the PageRank algorithm has been 1Codes and results are in: https://github.com/ PaddlePaddle/Research/tree/master/NLP/ ACL2020-GraphSum further developed to compute the salience of textual units recursively based on various graph representations of documents (Wan and Xiao, 2009; Cai and Li, 2012). More recently, Yasunaga et al. (2017) propose a neural graph-based model for extractive MDS. An approximate discourse graph is constructed based on discourse markers and entity links. The salience of sentences is estimated using features from graph convolutional networks (Kipf and Welling, 2016). Yin et al. (2019) also propose a graph-based neural sentence ordering model, which utilizes entity linking graph to capture the global dependencies between sentences. 2.2 Abstractive MDS Abstractive MDS approaches have met with limited success. 
Traditional approaches mainly include: sentence fusion-based (Banerjee et al., 2015; Filippova and Strube, 2008; Barzilay and McKeown, 2005; Barzilay, 2003), information extractionbased (Li, 2015; Pighin et al., 2014; Wang and Cardie, 2013; Genest and Lapalme, 2011; Li and Zhuge, 2019) and paraphrasing-based (Bing et al., 2015; Berg-Kirkpatrick et al., 2011; Cohn and Lapata, 2009). More recently, some researches parse the source text into AMR representation and then generate summary based on it (Liao et al., 2018). Although neural abstractive models have achieved promising results on SDS (See et al., 2017; Paulus et al., 2018; Gehrmann et al., 2018; Celikyilmaz et al., 2018; Li et al., 2018a,b; Narayan et al., 2018; Yang et al., 2019a; Sharma et al., 2019; Perez-Beltrachini et al., 2019), it’s not straightforward to extend them to MDS. Due to the lack of sufficient training data, earlier approaches try to simply transfer SDS model to MDS task (Lebanoff et al., 2018; Zhang et al., 2018; Baumel et al., 2018) or utilize unsupervised models relying on reconstruction objectives (Ma et al., 2016; Chu and Liu, 2019). Later, Liu et al. (2018) propose to construct a large scale MDS dataset (namely WikiSum) based on Wikipedia, and develop a Seq2Seq model by considering the multiple input documents as a concatenated flat sequence. Fan et al. (2019) further propose to construct a local knowledge graph from documents and then linearize the graph into a sequence to better sale Seq2Seq models to multidocument inputs. Fabbri et al. (2019) also introduce a middle-scale (about 50K) MDS news dataset (namely MultiNews), and propose an end-to-end model by incorporating traditional MMR-based 6234 Hierarchical Graph Attention Add & Normalize Feed Forward Add & Normalize Feed Forward POSITIONAL ENCODING ŏ Masked Self-Attention Add & Normalize Graph Decoding Layer token1 END Graph Encoding Layer first paragraph last paragraph Graph-informed Self-Attention Add & Normalize Feed Forward Add & Normalize Feed Forward PARAGRAPH POSITION ENCODING TOKEN POSITION ENCODING ŏ ŏ Transformer Transformer ŏ ŏ Figure 1: Illustration of our model, which follows the encoder-deocder architecture. The encoder is a stack of transformer layers and graph encoding layers, while the decoder is a stack of graph decoding layers. We incorporate explicit graph representations into both the graph encoding layers and graph decoding layers. extractive model with a standard Seq2Seq model. The above Seq2Seq models haven’t study the importance of cross-document relations and graph representations in MDS. Most recently, Liu and Lapata (2019a) propose a hierarchical transformer model to utilize the hierarchical structure of documents. They propose to learn cross-document relations based on selfattention mechanism. They also propose to incorporate explicit graph representations into the model by simply replacing the attention weights with a graph matrix, however, it doesn’t achieve obvious improvement according to their experiments. Our work is partly inspired by this work, but our approach is quite different from theirs. In contrast to their approach, we incorporate explicit graph representations into the encoding process via a graphinformed attention mechanism. Under the guidance of explicit relations in graphs, our model can learn better and richer cross-document relations, thus achieves significantly better performance.We also leverage the graph structure to guide the summary decoding process, which is beneficial for long summary generation. 
Additionally, we combine the advantages of pretrained LMs into our model. 2.3 Summarization with Pretrained LMs Pretrained LMs (Peters et al., 2018; Radford et al.; Devlin et al., 2019; Dong et al., 2019; Sun et al., 2019) have recently emerged as a key technology for achieving impressive improvements in a wide variety of natural language tasks, including both language understanding and language generation (Edunov et al., 2019; Rothe et al., 2019). Liu and Lapata (2019b) attempt to incorporate pre-trained BERT encoder into SDS model and achieves significant improvements. Dong et al. (2019) further propose a unified LM for both language understanding and language generation tasks, which achieves state-of-the-art results on several generation tasks including SDS. In this work, we propose an effective method to combine pretrained LMs with our graph model and make them be able to process much longer inputs effectively. 3 Model Description In order to process long source documents more effectively, we follow Liu and Lapata (2019a) in splitting source documents into multiple paragraphs by line-breaks. Then the graph representation of documents is constructed over paragraphs. For example, a similarity graph can be built based on cosine similarities between tf-idf representations of paragraphs. Let G denotes a graph representation matrix of the input documents, where G[i][j] indicates the relation weights between paragraph Pi and Pj. Formally, the task is to generate the summary S of the document collection given L input paragraphs P1, . . . , PL and their graph representation G. Our model is illustrated in Figure 1, which follows the encoder-decoder architecture (Bahdanau et al., 2015). The encoder is composed of several token-level transformer encoding layers and paragraph-level graph encoding layers which can be stacked freely. The transformer encoding layer follows the Transformer architecture introduced in Vaswani et al. (2017), encoding contextual information for tokens within each paragraph. The 6235 graph encoding layer extends the Transformer architecture with a graph attention mechanism to incorporate explicit graph representations into the encoding process. Similarly, the decoder is composed of a stack of graph decoding layers. They extend the Transformer with a hierarchical graph attention mechanism to utilize explicit graph structure to guide the summary decoding process. In the following, we will focus on the graph encoding layer and graph decoding layer of our model. 3.1 Graph Encoding Layer As shown in Figure 1, based on the output of the token-level transformer encoding layers, the graph encoding layer is used to encode all documents globally. Most existing neural work only utilizes attention mechanism to learn latent graph representations of documents where the graph edges are attention weights (Liu and Lapata, 2019a; Niculae et al., 2018; Fernandes et al., 2018). However, much work in traditional MDS has shown that explicit graph representations are very beneficial to MDS. Different types of graphs capture different kinds of semantic relations (e.g. lexical relations or discourse relations), which can help the model focus on different facets of the summarization task. In this work, we propose to incorporate explicit graph representations into the neural encoding process via a graph-informed attention mechanism. It takes advantage of the explicit relations in graphs to learn better inter-paragraph relations. 
Each paragraph can collect information from other related paragraphs to capture global information from the whole input. Graph-informed Self-attention The graphinformed self-attention extends the self-attention mechanism to consider the pairwise relations in explicit graph representations. Let xl−1 i denotes the output of the (l −1)-th graph encoding layer for paragraph Pi, where x0 i is just the input paragraph vector. For each paragraph Pi, the context representation ui can be computed as a weighted sum of linearly transformed paragraph vectors: αij =softmax(eij + ℜij) eij = (xl−1 i WQ)(xl−1 j WK)T √dhead ui = L X j=1 αij(xl−1 j WV ) (1) where WK, WQ and WV ∈Rd∗d are parameter weights. etj denotes the latent relation weight between paragraph Pi and Pj. The main difference of our graph-informed self-attention is the additional pairwise relation bias ℜij, which is computed as a Gaussian bias of the weights of graph representation matrix G: ℜij = −(1 −G[i][j])2 2σ2 (2) where σ denotes the standard deviation that represents the influence intensity of the graph structure. We set it empirically by tuning on the development dataset. The gaussian bias Rij ∈(−inf, 0] measures the tightness between the paragraphs Pi and Pj. Due to the exponential operation in softmax function, the gaussian bias approximates to multiply the latent attention distribution by a weight ∈(0, 1]. In our graph-attention mechanism, the term eij in Equation 1 keeps the ability to model latent dependencies between any two paragraphs, and the term ℜij incorporates explicit graph representations as prior constraints into the encoding process. This way, our model can learn better and richer inter-paragraph relations to obtain more informative paragraph representations. Then, a two-layer feed-forward network with ReLU activation function and a high-way layer normalization are applied to obtain the vector of each paragraph xl i: pl i =Wo2ReLU(Wo1(ui + xl−1 i )) xl i =LayerNorm(pl i + xl−1 i ) (3) where Wo1 ∈Rdff∗d and Wo2 ∈Rd∗dff are learnable parameters, dff is the hidden size of the feedforward layer. 3.2 Graph Decoding Layer Graphs can also contribute to the summary generation process. The relations between textual units can help to generate more coherent or concise summaries. For example, Christensen et al. (2013) propose to leverage an approximate discourse graph to help generate coherent extractive summaries. The discourse relations between sentences are used to help order summary sentences. In this work, we propose to incorporate explicit graph structure into the end-to-end summary decoding process. Graph edges are used to guide the summary generation process via a hierarchical graph attention, which 6236 is composed by a global graph attention and a local normalized attention. As other components in the graph decoding layer are similar to the Transformer architecture, we focus on the extension of hierarchical graph attention. Global Graph Attention The global graph attention is developed to capture the paragraph-level context information in the encoder part. Different from the context attention in Transformer, we utilize the explicit graph structure to regularize the attention distributions so that graph representations of documents can be used to guide the summary generation process. Let yl−1 t denotes the output of the (l −1)-th graph decoding layer for the t-th token in the summary. We assume that each token will align with several related paragraphs and one of them is at the central position. 
Since the prediction of the central position depends on the corresponding query token, we apply a feed-forward network to transform yl−1 t into a positional hidden state, which is then mapped into a scalar st by a linear projection: st = L ∗sigmoid(U T p tanh(Wpyl−1 t )) (4) where Wp ∈Rd∗d and Up ∈Rd denote weight matrix. st indicates the central position of paragraphs that are mapped by the t-th summary token. With the central position, other paragraphs are determined by the graph structure. Then an attention distribution over all paragraphs under the regularization of the graph structure can be obtained: βtj =softmax(etj −(1 −G[st][j])2 2σ2 ) (5) where etj denotes the attention weight between token vector yl−1 t and paragraph vector xj, which is computed similarly to Equation 1. The global context vector can be obtained as a weighted sum of paragraph vectors: gt = PL j=1 βtjxj In our decoder, graphs are also modeled as a Gaussian bias. Different from the encoder, a central mapping position is firstly decided and then graph relations corresponding to that position are used to regularize the attention distributions βtj. This way, the relations in graphs are used to help align the information between source input and summary output globally, thus guiding the summary decoding process. Local Normalized Attention Then, a local normalized attention is developed to capture the tokenlevel context information within each paragraph. The local attention is applied to each paragraph independently and normalized by the global graph attention. This way, our model can process longer inputs effectively. Let γt,ji denotes the local attention distributions of the t-th summary token over the i-th token in the j-th input paragraph, the normalized attention is computed by: ˆγt,ji = γt,jiβtj (6) and the local context vector can be computed as a weighted sum of token vectors in all paragraphs: lt = PL j=1 Pn k=1 ˆγt,jixji Finally, the output of the hierarchical graph attention component is computed by concatenating and linearly transforming the global and local context vector: dt = U T d [gt, lt] (7) where Ud ∈R2d∗d is a weight matrix. Through combining the local and global context, the decoder can utilize the source information more effectively. 3.3 Combined with Pre-trained LMs Our model can be easily combined with pre-trained LMs. Pre-trained LMs are mostly based on sequential architectures which are more effective on short text. For example, both BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) are pre-trained with maximum 512 tokens. Liu and Lapata (2019b) propose to utilize BERT on single document summarization tasks. They truncate the input documents to 512 tokens on most tasks. However, thanks to the graph modeling, our model can process much longer inputs. A natural idea is to combine our graph model with pretrained LMs so as to combine the advantages of them. Specifically, the tokenlevel transformer encoding layer of our model can be replaced by a pre-trained LM like BERT. In order to take full advantage of both our graph model and pre-trained LMs, the input documents are formatted in the following way: [CLS] first paragraph [SEP] [CLS] second paragraph [SEP] . . . [CLS] last paragraph [SEP] Then they are encoded by a pre-trained LM, and the output vector of the “[CLS]” token is used as the vector of the corresponding paragraph. Finally, all paragraph vectors are fed into our graph encoder to learn global representations. Our graph decoder is further used to generate the summaries. 
6237 4 Experiments 4.1 Experimental Setup Graph Representations We experiment with three well-established graph representations: similarity graph, topic graph and discourse graph. The similarity graph is built based on tf-idf cosine similarities between paragraphs to capture lexical relations. The topic graph is built based on LDA topic model (Blei et al., 2003) to capture topic relations between paragraphs. The edge weights are cosine similarities between the topic distributions of the paragraphs. The discourse graph is built to capture discourse relations based on discourse markers (e.g. however, moreover), co-reference and entity links as in Christensen et al. (2013). Other types of graphs can also be used in our model. In our experiments, if not explicitly stated, we use the similarity graph by default as it has been most widely used in previous work. WikiSum Dataset We follow Liu et al. (2018) and Liu and Lapata (2019a) in treating the generation of lead Wikipedia sections as a MDS task. The source documents are reference webpages of the Wikipedia article and top 10 search results returned by Google, while the summary is the Wikipedia article’s first section. As the source documents are very long and messy, they are split into multiple paragraphs by line-breaks. Further, the paragraphs are ranked by the title and top ranked paragraphs are selected as input for MDS systems. We directly utilize the ranking results from Liu and Lapata (2019a) and top-40 paragraphs are used as source input. The average length of each paragraph and the target summary are 70.1 tokens and 139.4 tokens, respectively. For the seq2seq baselines, paragraphs are concatenated as a sequence in the ranking order, and lead tokens are used as input. The dataset is split into 1,579,360 instances for training, 38,144 for validation and 38,205 for testing, similar to Liu and Lapata (2019a). We build similarity graph representations over paragraphs on this dataset. MultiNews Dataset Proposed by Fabbri et al. (2019), MultiNews dataset consists of news articles and human-written summaries. The dataset comes from a diverse set of news sources (over 1500 sites). Different from the WikiSum dataset, MultiNews is more similar to the traditional MDS dataset such as DUC, but is much larger in scale. As in Fabbri et al. (2019), the dataset is split into 44,972 instances for training, 5,622 for validation and 5,622 for testing. The average length of source documents and output summaries are 2103.5 tokens and 263.7 tokens, respectively. For the seq2seq baselines, we truncate N input documents to L tokens by taking the first L/N tokens from each source document. Then we concatenate the truncated source documents into a sequence by the original order. Similarly, for our graph model, the input documents are truncated to M paragraphs by taking the first M/N paragraphs from each source document. We build all three types of graph representations on this dataset to explore the influence of graph types on MDS. Training Configuration We train all models with maximum likelihood estimation, and use label smoothing (Szegedy et al., 2016) with smoothing factor 0.1. The optimizer is Adam (Kingma and Ba, 2015) with learning rate 2, β1=0.9 and β2=0.998. We also apply learning rate warmup over the first 8,000 steps and decay as in (Vaswani et al., 2017). Gradient clipping with maximum gradient norm 2.0 is also utilized during training. All models are trained on 4 GPUs (Tesla V100) for 500,000 steps with gradient accumulation every four steps. 
We apply dropout with probability 0.1 before all linear layers in our models. The number of hidden units in our models is set to 256, the feed-forward hidden size is 1,024, and the number of heads is 8. The numbers of transformer encoding layers, graph encoding layers and graph decoding layers are set to 6, 2 and 8, respectively. The parameter σ is set to 2.0 after tuning on the validation set. During decoding, we use beam search with beam size 5 and length penalty with factor 0.6. Trigram blocking is used to reduce repetition. For the models with pre-trained LMs, we apply different optimizers to the pre-trained part and the other parts, as in Liu and Lapata (2019b). Two Adam optimizers with β1 = 0.9 and β2 = 0.999 are used for the pre-trained part and the other parts, respectively. The learning rate and warmup steps for the pre-trained part are set to 0.002 and 20,000, while those for the other parts are 0.2 and 10,000. Other model configurations are in line with the corresponding pre-trained LMs. We choose the base versions of BERT, RoBERTa and XLNet in our experiments.

4.2 Evaluation Results
We evaluate our models on both the WikiSum and MultiNews datasets to validate their effectiveness on different types of corpora. The summarization quality is evaluated using ROUGE F1 (Lin and Och, 2004). We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) between system summaries and gold references as a means of assessing informativeness, and the longest common subsequence (ROUGE-L) as a means of assessing fluency. For fair comparison with previous work (Liu and Lapata, 2019a; Liu et al., 2018), we report summary-level ROUGE-L results on both datasets; the sentence-level ROUGE-L results are reported in the Appendix.

Model              R-1    R-2    R-L
Lead               38.22  16.85  26.89
LexRank            36.12  11.67  22.52
FT                 40.56  25.35  34.73
BERT+FT            41.49  25.73  35.59
XLNet+FT           40.85  25.29  35.20
RoBERTa+FT         42.05  27.00  36.56
T-DMCA             40.77  25.60  34.90
HT                 41.53  26.52  35.76
GraphSum           42.63  27.70  36.97
GraphSum+RoBERTa   42.99  27.83  37.36

Table 1: Evaluation results on the WikiSum test set using ROUGE F1. R-1, R-2 and R-L are abbreviations for ROUGE-1, ROUGE-2 and ROUGE-L, respectively.

Results on WikiSum  Table 1 summarizes the evaluation results on the WikiSum dataset. Several strong extractive and abstractive baselines are also evaluated and compared with our models. The first block in the table shows the results of the extractive methods Lead and LexRank (Erkan and Radev, 2004). The second block shows the results of abstractive methods: (1) FT (Flat Transformer), a transformer-based encoder-decoder model on a flat token sequence; (2) T-DMCA, the best performing model of Liu et al. (2018); (3) HT (Hierarchical Transformer), a model with a hierarchical transformer encoder and a flat transformer decoder, proposed by Liu and Lapata (2019a). We report their results following Liu and Lapata (2019a). The last block shows the results of our models, which are fed 30 paragraphs (about 2,400 tokens) as input. The results show that all abstractive models outperform the extractive ones. Compared with FT, T-DMCA and HT, our model GraphSum achieves significant improvements on all three metrics, which demonstrates the effectiveness of our model. Furthermore, we develop several strong baselines which combine the Flat Transformer with pre-trained LMs.
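The snippet below sketches two of the training and decoding details mentioned above: the warmup-then-decay learning-rate schedule of Vaswani et al. (2017) (with the 8,000 warmup steps reported in the training configuration) and the trigram-blocking check applied during beam search. It is an illustrative sketch, not the authors' code; the function names, and the assumption that the reported "learning rate 2" enters the schedule as the scaling factor of the standard Noam schedule, are introduced for the example.

```python
def noam_lr(step, d_model=256, factor=2.0, warmup=8000):
    """Warmup-then-inverse-square-root decay as in Vaswani et al. (2017).
    `factor` plays the role of the reported 'learning rate 2' (an assumption)."""
    step = max(step, 1)
    return factor * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

def violates_trigram_blocking(generated_tokens, candidate_token):
    """Return True if appending `candidate_token` would repeat an earlier trigram;
    beam search then disallows the candidate to reduce repetition."""
    seq = generated_tokens + [candidate_token]
    if len(seq) < 3:
        return False
    new_trigram = tuple(seq[-3:])
    earlier_trigrams = {tuple(seq[i:i + 3]) for i in range(len(seq) - 3)}
    return new_trigram in earlier_trigrams
```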
Model                      R-1    R-2    R-L
Lead                       41.24  12.91  18.84
LexRank                    41.01  12.69  18.00
PG-BRNN                    43.77  15.38  20.84
Hi-MAP                     44.17  16.05  21.38
FT                         44.32  15.11  20.50
RoBERTa+FT                 44.26  16.22  22.37
HT                         42.36  15.27  22.08
GraphSum                   45.02  16.69  22.50
G.S.(Similarity)+RoBERTa   45.93  17.33  23.33
G.S.(Topic)+RoBERTa        46.07  17.42  23.21
G.S.(Discourse)+RoBERTa    45.87  17.56  23.39

Table 2: Evaluation results on the MultiNews test set. We report the summary-level ROUGE-L value. The results of different graph types are also compared.

We replace the encoder of FT by the base versions of pre-trained LMs, yielding BERT+FT, XLNet+FT and RoBERTa+FT. For these models, the source input is truncated to 512 tokens (longer inputs do not bring obvious improvements). The results show that the pre-trained LMs significantly improve the summarization performance. As RoBERTa boosts the summarization performance most significantly, we also combine it with our GraphSum model, namely GraphSum+RoBERTa (since XLNet and BERT achieve worse results than RoBERTa, we only report GraphSum+RoBERTa). The results show that GraphSum+RoBERTa further improves the summarization performance on all metrics, demonstrating that our graph model can be effectively combined with pre-trained LMs. The significant improvements over RoBERTa+FT also demonstrate the effectiveness of our graph modeling even with pre-trained LMs.

Results on MultiNews  Table 2 summarizes the evaluation results on the MultiNews dataset. Similarly, the first block shows two popular extractive baselines, and the second block shows several strong abstractive baselines. We report the results of Lead, LexRank, PG-BRNN, Hi-MAP and FT following Fabbri et al. (2019). The last block shows the results of our models. The results show that our model GraphSum consistently outperforms all baselines, which further demonstrates the effectiveness of our model on different types of corpora. We also compare RoBERTa+FT and GraphSum+RoBERTa, which shows that our model significantly improves on all metrics.

Len    Model      R-1    R-2    R-L
500    HT         41.08  25.83  35.25
       GraphSum   41.55  26.24  35.59
       ∇          +0.47  +0.41  +0.34
800    HT         41.41  26.46  35.79
       GraphSum   41.70  26.87  36.10
       ∇          +0.29  +0.41  +0.31
1600   HT         41.53  26.52  35.76
       GraphSum   42.48  27.52  36.66
       ∇          +0.95  +1.00  +0.90
2400   HT         41.68  26.53  35.73
       GraphSum   42.63  27.70  36.97
       ∇          +0.95  +1.17  +1.24
3000   HT         41.71  26.58  35.81
       GraphSum   42.36  27.47  36.65
       ∇          +0.65  +0.89  +0.84

Table 3: Comparison of different input lengths on the WikiSum test set using ROUGE F1. ∇ indicates the improvements of GraphSum over HT.

The evaluation results on both the WikiSum and MultiNews datasets validate the effectiveness of our model. The proposed method of modeling graphs in an end-to-end neural model greatly improves the performance of MDS.

4.3 Model Analysis
We further analyze the effects of graph types and input length on our model, and validate the effectiveness of different components of our model with ablation studies.

Effects of Graph Types  To study the effects of graph types, we compare the results of GraphSum+RoBERTa with the similarity graph, topic graph and discourse graph on the MultiNews test set. The last block in Table 2 summarizes the comparison, which shows that the topic graph achieves better performance than the similarity graph on ROUGE-1 and ROUGE-2, and the discourse graph achieves the best performance on ROUGE-2 and ROUGE-L. The results demonstrate that graphs with richer relations are more helpful for MDS.
Effects of Input Length  Input length can seriously affect the performance of seq2seq summarization models, so most of them restrict the input length and feed the model only several hundred lead tokens. As stated by Liu and Lapata (2019a), the FT model achieves the best performance when the input length is set to 800 tokens, while longer input hurts performance. To explore the effectiveness of our GraphSum model on different input lengths, we compare it with HT on 500, 800, 1600, 2400 and 3000 input tokens, respectively. Table 3 summarizes the comparison results, which show that our model outperforms HT on all input lengths. More importantly, the advantages of our model on all three metrics tend to become larger as the input becomes longer. The results demonstrate that modeling graphs in the end-to-end model enables our model to process much longer inputs with better performance.

Model           ROUGE-1  ROUGE-2  ROUGE-L
GraphSum        42.63    27.70    36.97
w/o graph dec   42.06    27.13    36.33
w/o graph enc   40.61    25.90    35.26

Table 4: Ablation study on the WikiSum test set.

Ablation Study  Table 4 summarizes the results of ablation studies aiming to validate the effectiveness of individual components. Our experiments confirm that incorporating well-known graphs into the encoding process with our graph encoder (see w/o graph enc) and utilizing graphs to guide the summary decoding process with our graph decoder (w/o graph dec) are both beneficial for MDS.

4.4 Human Evaluation
In addition to the automatic evaluation, we also assess system performance by human evaluation. We randomly select 50 test instances from the WikiSum test set and 50 from the MultiNews test set, and invite 3 annotators to assess the outputs of different models independently. Annotators assess the overall quality of the summaries by ranking them, taking into account the following criteria: (1) Informativeness: does the summary convey the important facts of the input? (2) Fluency: is the summary fluent and grammatical? (3) Succinctness: does the summary avoid repeating information? Annotators are asked to rank all systems from 1 (best) to 5 (worst). Rankings can be the same for different systems if they are of similar quality; for example, the ranking of five systems could be 1, 2, 2, 4, 5 or 1, 2, 3, 3, 3. Systems receive scores of 2, 1, 0, -1 and -2 for rankings 1, 2, 3, 4 and 5, respectively. The rating of each system is computed by averaging its scores over all test instances.

Model       1     2     3     4     5     Rating
FT          0.18  0.21  0.23  0.16  0.22  -0.03*
R.B.+FT     0.32  0.22  0.17  0.19  0.10   0.49*
HT          0.21  0.32  0.12  0.15  0.20   0.19*
GraphSum    0.42  0.30  0.17  0.10  0.01   1.02
G.S.+R.B.   0.54  0.24  0.10  0.08  0.04   1.16

Table 5: Ranking results of system summaries by human evaluation. 1 is the best and 5 is the worst. A larger rating denotes better summary quality. R.B. and G.S. are abbreviations of RoBERTa and GraphSum, respectively. * indicates that the overall ratings of the corresponding model are significantly (by Welch's t-test with p < 0.01) outperformed by our models GraphSum and GraphSum+RoBERTa.

Table 5 summarizes the comparison results of the five systems; both the percentages of ranking results and the overall ratings are reported. The results demonstrate that GraphSum and GraphSum+RoBERTa generate higher-quality summaries than the other models. Specifically, the summaries generated by GraphSum and GraphSum+RoBERTa usually contain more salient information, and are more fluent and concise than those of the other models.
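To make the rank-to-rating conversion used in the human evaluation concrete, the short helper below (a hypothetical function, not part of the paper) turns per-instance rankings of one system into the averaged rating reported in Table 5.

```python
def system_rating(rankings):
    """Convert per-instance rankings (1 = best ... 5 = worst) of one system
    into its overall rating: ranks 1..5 map to scores 2, 1, 0, -1, -2, and
    the rating is the average score over all rated test instances."""
    rank_to_score = {1: 2, 2: 1, 3: 0, 4: -1, 5: -2}
    scores = [rank_to_score[r] for r in rankings]
    return sum(scores) / len(scores)

# Usage: system_rating([1, 2, 1, 3]) -> 1.25
```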
The human evaluation results further validates the effectiveness of our proposed models. 5 Conclusion In this paper we explore the importance of graph representations in MDS and propose to leverage graphs to improve the performance of neural abstractive MDS. Our proposed model is able to incorporate explicit graph representations into the document encoding process to capture richer relations within long inputs, and utilize explicit graph structure to guide the summary decoding process to generate more informative, fluent and concise summaries. We also propose an effective method to combine our model with pre-trained LMs, which further improves the performance of MDS significantly. Experimental results show that our model outperforms several strong baselines by a wide margin. In the future we would like to explore other more informative graph representations such as knowledge graphs, and apply them to further improve the summary quality. Acknowledgments This work was supported by the National Key Research and Development Project of China (No. 2018AAA0101900). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations. Siddhartha Banerjee, Prasenjit Mitra, and Kazunari Sugiyama. 2015. Multi-document abstractive summarization using ilp based multi-sentence compression. In Proceedings of 24th International Joint Conference on Artificial Intelligence. Regina Barzilay. 2003. Information Fusion for Multidocument Summerization: Paraphrasing and Generation. Ph.D. thesis, Columbia University. Regina Barzilay and Kathleen R McKeown. 2005. Sentence fusion for multidocument news summarization. Computational Linguistics, 31(3):297–328. Tal Baumel, Matan Eyal, and Michael Elhadad. 2018. Query focused abstractive summarization: Incorporating query relevance, multi-document coverage, and summary length constraints into seq2seq models. arXiv preprint arXiv:1801.07704. Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 481–490. Association for Computational Linguistics. Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, and Rebecca J Passonneau. 2015. Abstractive multidocument summarization via phrase selection and merging. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 1, pages 1587– 1597. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. Journal of machine Learning research, 3(Jan):993–1022. Xiaoyan Cai and Wenjie Li. 2012. Mutually reinforced manifold-ranking based relevance propagation model for query-focused multi-document summarization. IEEE Transactions on Audio, Speech, and Language Processing, 20(5):1597–1607. Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 1662– 1675. Janara Christensen, Stephen Soderland, Oren Etzioni, et al. 2013. Towards coherent multi-document summarization. 
In Proceedings of the 2013 conference of the North American chapter of the association for 6241 computational linguistics: Human language technologies, pages 1163–1173. Eric Chu and Peter Liu. 2019. Meansum: a neural model for unsupervised multi-document abstractive summarization. In International Conference on Machine Learning, pages 1223–1232. Trevor Anthony Cohn and Mirella Lapata. 2009. Sentence compression as tree transduction. Journal of Artificial Intelligence Research, 34:637–674. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 4171–4186. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), pages 13042–13054. Sergey Edunov, Alexei Baevski, and Michael Auli. 2019. Pre-trained language model representations for language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 4052– 4059. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artificial intelligence research, 22:457–479. Alexander R Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R Radev. 2019. Multi-news: a largescale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074–1084. Angela Fan, Claire Gardent, Chlo´e Braud, and Antoine Bordes. 2019. Using local knowledge graph construction to scale seq2seq models to multi-document inputs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4184–4194. Patrick Fernandes, Miltiadis Allamanis, and Marc Brockschmidt. 2018. Structured neural summarization. In Proceedings of the 7th International Conference on Learning Representations. Katja Filippova and Michael Strube. 2008. Sentence fusion via dependency graph compression. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 177–185. Association for Computational Linguistics. Sebastian Gehrmann, Yuntian Deng, and Alexander M Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098–4109. Pierre-Etienne Genest and Guy Lapalme. 2011. Framework for abstractive summarization using text-totext generation. In Proceedings of the Workshop on Monolingual Text-To-Text Generation, pages 64–73. Association for Computational Linguistics. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Logan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. 
Adapting the neural encoder-decoder framework from single to multi-document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4131–4141. Wei Li. 2015. Abstractive multi-document summarization with semantic information extraction. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1908– 1913. Wei Li, Xinyan Xiao, Yajuan Lyu, and Yuanzhuo Wang. 2018a. Improving neural abstractive document summarization with explicit information selection modeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1787–1796. Wei Li, Xinyan Xiao, Yajuan Lyu, and Yuanzhuo Wang. 2018b. Improving neural abstractive document summarization with structural regularization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4078– 4087. Wei Li and Hai Zhuge. 2019. Abstractive multidocument summarization based on semantic link network. IEEE Transactions on Knowledge and Data Engineering. Kexin Liao, Logan Lebanoff, and Fei Liu. 2018. Abstract meaning representation for multi-document summarization. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1178–1190. Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, pages 605–612. Association for Computational Linguistics. 6242 Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In Proceedings of the 6th International Conference on Learning Representations. Yang Liu and Mirella Lapata. 2019a. Hierarchical transformers for multi-document summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5070– 5081. Yang Liu and Mirella Lapata. 2019b. Text summarization with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3728–3738. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Shulei Ma, Zhi-Hong Deng, and Yunlun Yang. 2016. An unsupervised multi-document summarization framework based on neural document model. In Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers, pages 1514–1523. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404–411. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807. Vlad Niculae, Andr´e FT Martins, and Claire Cardie. 2018. Towards dynamic computation graphs via sparse latent structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 905–911. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. 
A deep reinforced model for abstractive summarization. In Proceedings of the 6th International Conference on Learning Representations. Laura Perez-Beltrachini, Yang Liu, and Mirella Lapata. 2019. Generating summaries with topic templates and structured convolutional decoders. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5107–5116. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 2227–2237. Daniele Pighin, Marco Cornolti, Enrique Alfonseca, and Katja Filippova. 2014. Modelling events through memory-based, open-ie patterns for abstractive summarization. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 892–901. Dragomir Radev. 2000. A common theory of information fusion from multiple text sources step one: cross-document structure. In 1st SIGdial workshop on Discourse and dialogue. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2019. Leveraging pre-trained checkpoints for sequence generation tasks. arXiv preprint arXiv:1907.12461. Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1073–1083. Eva Sharma, Luyang Huang, Zhe Hu, and Lu Wang. 2019. An entity-driven framework for abstractive summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 3278– 3289. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2019. Ernie 2.0: A continual pre-training framework for language understanding. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Xiaojun Wan. 2008. An exploration of document impact on graph-based multi-document summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 755–762. Association for Computational Linguistics. Xiaojun Wan and Jianguo Xiao. 2009. Graph-based multi-modality learning for topic-focused multidocument summarization. In Proceedings of the 6243 21st International Joint Conference on Artificial Intelligence. Lu Wang and Claire Cardie. 2013. Domainindependent abstract generation for focused meeting summarization. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1395–1405. Wenmian Yang, Weijia Jia, Wenyuan Gao, Xiaojie Zhou, and Yutao Luo. 2019a. 
Interactive variance attention based online spoiler detection for time-sync comments. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 1241–1250. ACM. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019b. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems (NIPS 2019), pages 5754–5764. Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning, pages 452–462. Yongjing Yin, Linfeng Song, Jinsong Su, Jiali Zeng, Chulun Zhou, and Jiebo Luo. 2019. Graph-based neural sentence ordering. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 5387–5393. Jianmin Zhang, Jiwei Tan, and Xiaojun Wan. 2018. Towards a neural network approach to abstractive multi-document summarization. arXiv preprint arXiv:1804.09010. A Appendix We report the sentence-level ROUGE-L evaluation results of our models on both the two datasets, so that future work can compare with them conveniently. Model R-1 R-2 R-L RoBERTa+FT 42.05 27.00 40.05 GraphSum 42.63 27.70 40.13 GraphSum+RoBERTa 42.99 27.83 40.97 Table 6: Evaluation results on the WikiSum test set with sentence-level ROUGE-L value. Model R-1 R-2 R-L RoBERTa+FT 44.26 16.22 40.64 GraphSum 45.02 16.69 41.11 G.S.(Similarity)+RoBERTa 45.93 17.33 42.02 G.S.(Topic)+RoBERTa 46.07 17.42 42.22 G.S.(Discourse)+RoBERTa 45.87 17.56 42.00 Table 7: Evaluation results on the MultiNews test set with sentence-level ROUGE-L value.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6244–6254 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6244 Multi-Granularity Interaction Network for Extractive and Abstractive Multi-Document Summarization Hanqi Jin, Tianming Wang, Xiaojun Wan Wangxuan Institute of Computer Technology, Peking University Center for Data Science, Peking University The MOE Key Laboratory of Computational Linguistics, Peking University {jinhanqi,wangtm,wanxiaojun}@pku.edu.cn Abstract In this paper, we propose a multi-granularity interaction network for extractive and abstractive multi-document summarization, which jointly learn semantic representations for words, sentences, and documents. The word representations are used to generate an abstractive summary while the sentence representations are used to produce an extractive summary. We employ attention mechanisms to interact between different granularity of semantic representations, which helps to capture multi-granularity key information and improves the performance of both abstractive and extractive summarization. Experiment results show that our proposed model substantially outperforms all strong baseline methods and achieves the best results on the Multi-News dataset. 1 Introduction Document summarization aims at producing a fluent, condensed summary for given documents. Single document summarization has shown promising results with sequence-to-sequence models that encode a source document and then decode it into a summary (See et al., 2017; Paulus et al., 2018; Gehrmann et al., 2018; C¸ elikyilmaz et al., 2018). Multi-document summarization requires producing a summary from a cluster of thematically related documents, where the given documents complement and overlap each other. Multi-document summarization involves identifying important information and filtering out redundant information from multiple input sources. There are two primary methodologies for multidocument summarization: extractive and abstractive. Extractive methods directly select important sentences from the original, which are relatively simple. Cao et al. (2015) rank sentences with a recursive neural network. Yasunaga et al. (2017) employ a Graph Convolutional Network (GCN) to incorporate sentence relation graphs to improve the performance for the extractive summarization. Abstractive methods can generate new words and new sentences, but it is technically more difficult than extractive methods. Some works on multidocument summarization simply concatenate multiple source documents into a long flat sequence and model multi-document summarization as a long sequence-to-sequence task (Liu et al., 2018; Fabbri et al., 2019). However, these approaches don’t take the hierarchical structure of document clusters into account, while the too-long input often leads to the degradation in document summarization (Cohan et al., 2018; Liu and Lapata, 2019). Recently, hierarchical frameworks have shown their effectiveness on multi-document summarization (Zhang et al., 2018; Liu and Lapata, 2019). These approaches usually use multiple encoders to model hierarchical relationships in the discourse structure, but other methods to incorporate the structural semantic knowledge have not been explored. The combination of extractive and abstractive has been explored in single document summarization. Chen and Bansal (2018) use the extracted sentences as the input of the abstractive summarization. Subramanian et al. 
(2019) concatenate the extracted summary to the original document as the input of the abstractive summarization. In this work, we treat documents, sentences, and words as the different granularity of semantic units, and connect these semantic units within a three-granularity hierarchical relation graph. With the multi-granularity hierarchical structure, we can unify extractive and abstractive summarization into one architecture simultaneously. Extractive summarization operates on sentence-granularity and directly supervises the sentence representations while abstractive summarization operates on wordgranularity and directly supervises the word repre6245 sentations. We propose a novel multi-granularity interaction network to enable the supervisions to promote the learning of all granularity representations. We employ the attention mechanism to encode the relationships between the same semantic granularity and hierarchical relationships between the different semantic granularity, respectively. And we use a fusion gate to integrate the various relationships for updating the semantic representations. The decoding part consists of a sentence extractor and a summary generator. The sentence extractor utilizes the sentence representations to select sentences, while the summary generator utilizes the word representations to generate a summary. The two tasks are trained in a unified architecture to promote the recognition of important information simultaneously. We evaluate our model on the recently released Multi-News dataset and our proposed architecture brings substantial improvements over several strong baselines. We explore the influence of semantic units with different granularity, and the ablation study shows that joint learning of extractive and abstractive summarization in a unified architecture improves the performance. In summary, we make the following contributions in this paper: • We establish multi-granularity semantic representations for documents, sentences, and words, and propose a novel multi-granularity interaction network to encode multiple input documents. • Our approach can unify the extractive and abstractive summarization into one architecture with interactive semantic units and promote the recognition of important information in different granularities. • Experimental results on the Multi-News dataset show that our approach substantially outperforms several strong baselines and achieves state-of-the-art performance. Our code is publicly available at https://github. com/zhongxia96/MGSum. 2 Related Work The methods for multi-document summarization can generally be categorized to extractive and abstractive. The extractive methods produce a summary by extracting and merging sentences from the input documents, while the abstractive methods generate a summary using arbitrary words and expressions based on the understanding of the documents. Due to the lack of available training data, most previous multi-document summarization methods were extractive (Erkan and Radev, 2004; Christensen et al., 2013; Yasunaga et al., 2017). Since the neural abstractive models have achieved promising results on single-document summarization (See et al., 2017; Paulus et al., 2018; Gehrmann et al., 2018; C¸ elikyilmaz et al., 2018), some works trained abstractive summarization models on a single document dataset and adjusted the model to adapt the multi-document summarization task. Zhang et al. 
(2018) added a document set encoder into the single document summarization framework and tuned the pre-trained model on the multi-document summarization dataset. Lebanoff et al. (2018) combined an extractive summarization algorithm (MMR) for sentence extraction to reweight the original sentence importance distribution learned in the single document abstractive summarization model. Recently, two large scale multi-document summarization datasets have been proposed, one for very long input, aimed at generating Wikipedia (Liu et al., 2018) and another dedicated to generating a comprehensive summarization of multiple real-time news (Fabbri et al., 2019). Liu et al. (2018) concatenated multiple source documents into a long flat text and introduced a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures. Liu and Lapata (2019) introduced intermediate document representations and simply add the document representations to word representations for modeling the cross-document relationships. Compared with our proposed multi-granularity method, Liu and Lapata (2019) inclined to the traditional bottomup hierarchical method and don’t effectively utilize the hierarchical representations while ignoring the hierarchical relationships of sentences. Fabbri et al. (2019) incorporated MMR into a hierarchical pointer-generator network to address the information redundancy in multi-document summarization. 3 Our Approach Our model consists of a multi-granularity encoder, a sentence extractor, and a summary generator. Firstly, the multi-granularity encoder reads multiple input documents and learns the multi6246 granularity representations for words, sentences, and documents. Self-attention mechanisms are employed for capturing semantic relationships of the representations with same granularity, while crossattention mechanisms are employed for the information interaction between representations with different granularity. Fusion gates are used for integrating the information from different attention mechanisms. Then the sentence extractor scores sentences according to the learned sentence representations. Meanwhile, the summary generator produces the abstractive summary by attending to the word representations. In the following sections, we will describe the multi-granularity encoder, the sentence extractor, and the summary generator, respectively. 3.1 Multi-Granularity Encoder Given a cluster of documents, we establish explicit representations for documents, sentences, and words, and connect them within a hierarchical semantic relation graph. The multi-granularity encoder is a stack of L1 identical layers. Each layer has two sub-layers: the first is the multigranularity attention layer, and the second is multiple fully connected feed-forward networks. The multi-granularity attention sub-layer transfers semantic information between the different granularity and the same granularity, while the feed-forward network further aggregates the multi-granularity information. We employ multi-head attention to encode multi-granularity information and use a fusion gate to propagate semantic information to each other. Figure 1 shows the overview of the multi-granularity encoder layer, and Figure 2 illustrates how the semantic representations are updated, which takes the sentence representation as an example. Let wi,j,k be the k-th word of the sentence si,j in the document di. 
At the bottom of the encoder stack, each input word wi,j,k is converted into the vector representation ei,j,k by learned embeddings. We assign positional encoding to indicate the position of the word wi,j,k and three positions need to be considered, namely i (the rank of the document), j (the position of the sentence within the document), k (the position of the word within the sentence). We concatenate the three position embedding PEi, PEj,and PEk to get the final position embedding pi,j,k. The input word representation can be obtained by simply adding the word duplicate Word Sentence Document self-attention cross-attention Figure 1: The overview of the multi-granularity encoder layer. embedding ei,j,k and the position embedding pi,j,k: pi,j,k = [PEi; PEj; PEk] h0 wi,j,k = ei,j,k + pi,j,k (1) where the definition of positional encoding PE is consistent with the Transfomer (Vaswani et al., 2017). For convenience, we denote the output of l-th multi-granularity encoder layer as hl and the input for the first layer as h0. Symbols with subscripts wi,j,k, si,j and di are used to denote word, sentence, and document granularities, respectively. Both sentence representations h0 si,j and document representations h0 di are initialized to zeros. In each multi-granularity attention sub-layers, the word representation is updated by the information of word granularity and sentence granularity. We perform multi-head self-attention across the word representations in the same sentence hl−1 wi,j,∗= {hl−1 wi,j,k|wi,j,k ∈si,j} to get the context representation ˜hl wi,j,k. In order to propagate semantic information from sentence granularity to the word granularity, we duplicate sentence-aware representation ←−h l wi,j,k from corresponding sentence si,j and employ a fusion gate to integrate ˜hl wi,j,k and ←−h l wi,j,k to get the updated word representation fl wi,j,k. fl wi,j,k = Fusion  ˜hl wi,j,k, ←−h l wi,j,k  ˜hl wi,j,k = MHAtt  hl−1 wi,j,k, hl−1 wi,j,∗  ←−h l wi,j,k = hl−1 si,j (2) where MHAtt denotes the multi-head attention proposed in Vaswani et al. (2017) and Fusion denotes the fusion gate. hl−1 wi,j,k is the query and hl−1 wi,j,∗are 6247 Multi-Head Self-Attention Multi-Head Cross-Attention Feed-Forward Fusion Gate Fusion Gate Si,1 Si,2 Si,n ... Si,j wi,j,1 wi,j,2 wi,n,m ... di Figure 2: The multi-granularity encoder layer for updating sentence representation. The sentence representation is updated by using two fusion gates to integrate the information from different granularities. the keys and values for attention. The fusion gate works as z = σ ([x; y]Wf + bf) Fusion(x, y) = z x + (1 −z) y (3) where σ is the sigmoid function, parameters Wf ∈ R2∗dmodel×1 and bf ∈R. The sentence representation is updated from three sources: (1) We take the sentence representation hl−1 si,j as the query, the word representations hl−1 wi,j,∗= {hl−1 wi,j,k|wi,j,k ∈si,j} as the keys and values, to perform multi-head cross-attention to get the intermediate word-aware representation −→h l−1 si,j ; (2) Multi-head self-attention across sentence representations hl−1 si,∗= {hl−1 si,j |si,j ∈di} is performed to get the context representation ˜hl si,j; (3) In order to propagate document granularity semantic information to the sentence, we duplicate the document-aware representation ←−h l si,j from corresponding document di. 
−→h l si,j = MHAtt  hl−1 si,j , hl−1 wi,j,∗  ˜hl si,j = MHAtt  hl−1 si,j , hl−1 si,∗  ←−h l si,j = hl−1 di (4) Semantic representations from the three sources are fused by two fusion gate to get the updated sentence representation fl si,j. fl si,j = Fusion  Fusion −→h l si,j, ←−h l si,j  , ˜hl si,j  (5) To update the document representation, multihead self-attention across all document representations hl−1 d∗ = {hl−1 di } is performed to get the context representation ˜hl di. Meanwhile, we take the document representation hl−1 di as the query, sentence representations {hl−1 si,j |si,j ∈di} as the keys and values to perform multi-head cross-attention to get the intermediate sentence-aware representation −→h l di. A fusion gate is used to aggregate the above outputs ˜hl di and −→h l di. fl di = Fusion  ˜hl di, −→h l di  ˜hl di = MHAtt  hl−1 di , hl−1 d∗  −→h l di = MHAtt  hl−1 di , hl−1 si,∗  (6) The feed-forward network FFN is used to transform multiple-granularity semantic information further. To construct deep network, we use the residual connection (He et al., 2016) and layer normalization (Ba et al., 2016) to connect adjacent layers. ˜h=LayerNorm  hl−1+ fl hl =LayerNorm(˜h + FFN(˜h)) (7) where l ∈[1, L1], FFN consists of two linear transformations with a ReLU activation in between. Note that we used different FFN and LayerNorm for the different granularity. The final representation hL1 s is fed to the sentence extractor while hL1 w is fed to the summary generator. For convenience, we denote hL1 s as os, and hL1 w as ow. 3.2 Sentence Extractor we build a classifier to select sentences based on the sentence representations os from the multigranularity encoder. The classifier uses a linear transformation layer with the sigmoid activation function to get the prediction score for each sentence ˜ys = σ (osWo + bo) (8) where σ is the sigmoid function, parameters Wo ∈ Rdmodel×1 and bo ∈R. These scores are used to sort the sentences of multiple documents and produce the extracted summary. 3.3 Summary Generator The summary generator in our model is also a stack of L2 identical layers. The layer consists of three parts: a masked multi-head self-attention mechanism, a multi-head cross-attention mechanism, and 6248 a fully connected feed-forward network. As the input and output of multi-document summarization are generally long, the multi-head attention degenerates as the length increases (Liu and Lapata, 2019). Following Zhao et al. (2019) ’s idea, we adopt a sparse attention mechanism where each query only attends to the top-k values according to their weights calculated by the keys rather than all values in the original attention (Vaswani et al., 2017). And k is a hyper-parameter. This ensures that the generator focuses on critical information in the input and ignores much irrelevant information. We denote the multi-head sparse attention as MSAttn. Similar to the multi-granularity encoder, we add the positional encoding of words in the summary to the input embedding at the bottom of the decoder stack. We denote the output of the l-th layer as gl and the input for the first layer as g0. The self-attention sub-layer with masking mechanism is used to encode the decoded information. The masking mechanism ensures that the prediction of the position t depends only on the known output of the position before t. 
˜g = LayerNorm gl−1+MSAttn gl−1, gl−1 (9) The cross-attention sub-layer take the selfattention output ˜g as the queries and the multigranularity encoder output ow as keys and values to performs multi-head sparse attention. The feedforward network is used to further transform the outputs. c = LayerNorm (˜g + MSAtt (˜g, ow)) gl = LayerNorm (c + FFN(c)) (10) The generation distribution pg t over the target vocabulary is calculated by feeding the output gL2 t to a softmax layer. pg t = softmax  gL2 t Wg + bg  (11) where Wg ∈Rdmodel×dvocab, bg ∈Rdvocab and dvocab is the size of target vocabulary. The copy mechanism (Gu et al., 2016) is employed to tackle the problem of out-of-vocabulary (OOV) words. We compute the copy attention εt with the decoder output gL2 and the input representations ow, and further obtain copy distribution pc t. εt = softmax(gL2 t o⊤ w + bε) pc t = X i,j,k εtz⊤ i,j,k (12) where zi,j,k is the one-hot indicator vector for wi,j,k and bε ∈Rdvocab. A gate is used over the the decoder output gL2 to control generating words from the vocabulary or copying words directly from the source text. The final distribution pt is the “mixture” of the two distributions pg t and pc t. ηt = σ  gL2 t Wη + bη  pt = ηt ∗pg t + (1 −ηt) ∗pc t (13) where σ is the sigmoid function, Wη ∈ Rdmodel×1, bη ∈R. 3.4 Objective Function We train the sentence extractor and the summary generator in a unified architecture in an end-toend manner. We use the cross entropy as both the extractor loss and the generator loss. Lext = −1 N N X n=1  y(n) s log ˜y(n) s +  1 −y(n) s  log  1 −˜y(n) s  Labs = −1 N N X n=1 log p(y(n) w ) (14) where ys is the ground-truth extracted label, yw is the ground-truth summary and N is the number of samples in the corpus. The final loss is as below Lmix = Labs + λLext (15) where λ is a hyper-parameter. 4 Experiment 4.1 Dataset We experiment with the latest released Multi-News dataset (Fabbri et al., 2019), which is the first large scale multi-document news summarization dataset. It contains about 44972 pairs for training, 5622 pairs for development, and 5622 for the test. Each summary of the average of 264 words is paired with a documents cluster of average 2103 words discussing a topic. The number of source documents per summary presents as shown in Table 1 . While the dataset contains abstractive gold summaries, it is not readily suited to training extractive models. So we follow the work of Zhou et al. (2018) on extractive summary labeling, constructing gold-label sequences by greedily optimizing ROUGE-2 F1 on the gold-standard summary. 6249 # of source Frequency # of source Frequency 2 23,894 7 382 3 12,707 8 209 4 5,022 9 89 5 1,873 10 33 6 763 Table 1: The distribution of number of source articles per instance in Multi-News dataset. 4.2 Implementation Details We set our model parameters based on preliminary experiments on the development set. We prune the vocabulary to 50k and use the word in the source documents with maximum weight in copy attention to replace the unknown word of the generated summary. We set the dimension of word embeddings and hidden units dmodel to 512, feed-forward units to 2048. We set 8 heads for multi-head selfattention, masked multi-head sparse self-attention, and multi-head sparse cross-attention. We set the number of multi-granularity encoder layer L1 to 5 and summary decoder layer L2 to 6. 
We set dropout (Srivastava et al., 2014) rate to 0.1 and use Adam optimizer with an initial learning rate α = 0.0001, momentum β1 = 0.9, β2 = 0.999 and weight decay ϵ = 10−5. When the valid loss on the development set increases for two consecutive epochs, the learning rate is halved. We use a mini-batch size of 10, and set the hyper-parameter k = 5 and λ = 2. Given the salience score predicted by the sentence extractor, we apply a simple greedy procedure to select sentences. We select one sentence based on the descending order of the salience scores and append to the extracted summary until the summary reaches 300 words. We disallow repeating the same trigram (Paulus et al., 2018; Edunov et al., 2019) and use beam search with a beam size of 5 for summary generator. 4.3 Metrics and Baselines We use ROUGE (Lin, 2004) to evaluate the produced summary in our experiments. Following previous work, we report ROUGE F11 on MultiNews dataset. We compare our model with several typical baselines and several baselines proposed in the latest years. Lead-3 is an extractive baseline which concatenates the first-3 sentences of each source document as a summary. LexRank (Erkan and Radev, 2004) 1The ROUGE evaluation option: -c 95 -2 4 -U -r 1000 -n 4 -w 1.2 -a Model R-1 R-2 R-SU4 Lead-3 39.41 11.77 14.51 LexRank (Erkan and Radev, 2004) 38.27 12.70 13.20 TextRank (Mihalcea and Tarau, 2004) 38.44 13.10 13.50 MMR(Carbonell and Goldstein, 1998) 38.77 11.98 12.91 HIBERT (Zhang et al., 2019) 43.86 14.62 18.34 PGN (See et al., 2017) 41.85 12.91 16.46 CopyTransformer(Gehrmann et al., 2018) 43.57 14.03 17.37 Hi-MAP(Fabbri et al., 2019) 43.47 14.89 17.41 HF(Liu and Lapata, 2019) 43.85 15.60 18.80 MGSum-ext 44.75 15.75 19.30 MGSum-abs 46.00 16.81 20.09 oracle ext 49.02 29.78 29.19 Table 2: ROUGE F1 evaluation results on the MultiNews test set. is an unsupervised graph based method for computing relative importance in extractive summarization. TextRank (Mihalcea and Tarau, 2004) is also an unsupervised algorithm while sentence importance scores are computed based on eigenvector centrality within weighted-graphs for extractive sentence summarization. MMR (Carbonell and Goldstein, 1998) extracts sentences with a ranked list of the candidate sentences based on the relevance and redundancy. HIBERT (Zhang et al., 2019) first encodes each sentence using the sentence Transformer encoder, and then encode the whole document using the document Transformer encoder. It is a single document summarization model and cannot handle the hierarchical relationship of documents. We migrate it to multi-document summarization by concatenating multiple source documents into a long sequence. These extractive methods are set to give an output of 300 tokens. PGN (See et al., 2017) is an RNN based model with an attention mechanism and allows the system to copy words from the source text via pointing for abstractive summarization. CopyTransformer (Gehrmann et al., 2018) augments Transformer with one of the attention heads chosen randomly as the copy distribution. Hi-MAP (Fabbri et al., 2019) expands the pointer-generator network model into a hierarchical network and integrates an MMR module to calculate sentence-level scores, which is trained on the Multi-News corpus. The baseline above has been compared and reported in the Fabbri et al. (2019), which releases the Multi-News dataset, and we directly cite the results of the above methods from this paper. 
HT (Liu and Lapata, 2019) is a Transformer-based model with an attention mechanism to share information across documents for abstractive multi-document summarization. It was initially used to generate Wikipedia, and we reproduce their method for multi-document news summarization.

4.4 Automatic Evaluation
Following previous work, we report ROUGE-1 (unigram), ROUGE-2 (bigram), and ROUGE-SU4 (skip bigrams with a maximum distance of 4 words) scores as the metrics for automatic evaluation (Lin and Hovy, 2003). In Table 2, we report the results on the Multi-News test set; our proposed multi-granularity model (denoted as MGSum) outperforms various previous models. Our abstractive method achieves scores of 46.00, 16.81, and 20.09 on the three ROUGE metrics, while our extractive method achieves scores of 44.75, 15.75, and 19.30. We can also see that the abstractive methods perform better than the extractive methods. We attribute this result to the observation that the gold summaries of this dataset tend to use new expressions to summarize the original input documents. Owing to the characteristics of news, Lead-3 is superior to all unsupervised extractive methods. Our extractive method achieves an improvement of about 1.13 points on ROUGE-2 F1 compared with HIBERT. We attribute the improvement to two aspects. Firstly, the abstractive objective can promote the recognition of important sentences for the extractive model through the multi-granularity interaction network. Secondly, since the extractive gold-label sequences are obtained by greedily optimizing ROUGE-2 F1 on the gold-standard summary, the gold labels may not be accurate, and joint learning of the two objectives may correct some of the biases caused by the inaccurate labels. We calculate the oracle result based on the gold-label extractive sequences, which achieves a score of 29.78 on ROUGE-2 F1 and is 14.03 points higher than the score of our extractive method. While there is a big gap between our model and the oracle, more efforts can be made to improve extractive performance. Among the abstractive baselines, CopyTransformer performs much better than PGN and achieves an improvement of 1.12 points on ROUGE-2 F1, which demonstrates the superiority of the Transformer architecture. Our abstractive model gains improvements of 2.78 points compared with CopyTransformer, 1.92 points compared with Hi-MAP, and 1.21 points compared with HF on ROUGE-2 F1, which verifies the effectiveness of the proposed multi-granularity interaction network for summary generation.

[Figure 3: Human evaluation. The compared system summaries are rated on a Likert scale of 1 (worst) to 5 (best). Average scores per aspect:
Model            Fluency  Informativeness  Non-redundancy
PGN              2.81     2.89             2.73
CopyTransformer  2.98     3.05             2.95
Hi-MAP           2.82     2.97             2.96
HF               3.07     3.06             3.03
MGSum            3.22     3.38             3.29]

4.5 Human Evaluation
To evaluate the linguistic quality of the generated summaries, we carry out a human evaluation. We focus on three aspects: fluency, informativeness, and non-redundancy. The fluency indicator focuses on whether the summary is well-formed and grammatical. The informativeness indicator reflects whether the summary covers the salient points of the input documents. The non-redundancy indicator measures whether the summary contains repeated information. We sample 100 instances from the Multi-News test set and employ 5 graduate students to rate each summary. Each human judgment covers all outputs of the different systems for the same sample.
3 human judgments are obtained for every sample, and the final scores are averaged across different judges. Results are presented in Figure 3. We can see that our model performs much better than all baselines. In the fluency indicator, our model achieves a high score of 3.22, which is higher than 2.98 of CopyTransformer and 3.07 of HF, indicating that our model can reduce the grammatical errors and improve the readability of the summary. In the informativeness indicator, our model is 0.32 better than HF on ROUGE-2 F1. It indicates that our model can effectively capture the salient information. In the non-redundancy indicator, MGSum outperforms all baselines by a large margin, that indicates the multi-granularity semantic information and joint learning with extractive summarization does help to avoid the repeating information of the generated summary. 6251 Model R-1 R-2 R-SU4 MGSum-ext 45.04 15.98 19.53 only sentence extractor 44.65 15.67 19.27 without doc representation 44.67 15.58 19.15 MGSum-abs 46.08 16.92 20.15 only summary generator 45.57 16.32 19.56 without doc representation 45.71 16.62 19.80 without doc&sent representation 44.05 15.31 18.27 Table 3: Results of ablation study on the Multi-News development set. 4.6 Ablation Study We perform an ablation study on the development set to investigate the influence of different modules in our proposed MGSum model. Modules are tested in four ways: (1) we remove the sentence extractor and only train the generator to verify the effectiveness of joint learning on the abstractive summarization; (2) we remove the summary generator part and only train the sentence extractor to verify the effectiveness of joint learning on the extractive summarization; (3) we remove the document representation and use only the sentence and word representations to verify the effectiveness of the document granularity semantic information; (4) We remove the document and sentence representation and use only the word representation to verify the importance of the sentence representation further. Since there are no interactions between the sentences of different documents without document representations, we establish connections between all sentences after the document representation is removed. Furthermore, we also establish connections between all the words after the sentence representation is removed, and the model degenerates into Transformer at this time. Table 3 presents the results. We find that the ROUGE-2 F1 score of extractive summarization drops by 0.31 after the summary generator is removed. This indicates that the joint learning method helps extractive summarization to benefit from the abstractive summarization. ROUGE2 F1 score of abstractive summarization drops by 0.6 after the sentence extractor is removed. This indicates that extractive summarization does help abstractive summarization identify important sentences during the interactive encoding phrase. ROUGE-2 F1 score of extractive summarization drops by 0.4, while the ROUGE-2 F1 score of abstractive summarization drops by 0.3 after the document representation is removed. It indicates esHuman: – it ’ s a race for the governor ’ s mansion in 11 states today , and the gop could end the night at the helm of more than two-thirds of the 50 states . the gop currently controls 29 of the country ’ s top state offices ; it ’ s expected to keep the three republican ones that are up for grabs ( utah , north dakota , and indiana ) , and wrest north carolina from the dems . 
that brings its toll to 30 , with the potential to take three more , reports npr . races in montana , new hampshire , and washington are still too close to call , and in all three , democrat incumbents aren ’ t seeking reelection . the results could have a big impact on health care , since a supreme court ruling grants states the ability to opt out of obamacare ’ s medicaid expansion . ” a romney victory would dramatically empower republican governors , ” said one analyst . click for npr ’ s state-by-state breakdown of what could happen . HF: – delaware , new hampshire , and missouri are expected to notch safe wins in 11 states , reports npr . the state ’ s top state of the state has seen its top state offices , and it ’ s expected to be more than twothirds of the nation ’ s state , reports the washington post . the top 10 : montana , montana , and rhode island . indiana : missouri : the state is home to the top of the list of state offices . new hampshire : montana : incumbent john kasich : he ’ s the first woman to hold a state seat in the state , notes the huffington post . north carolina : the only state to win gop-held seats in vermont and delaware . new jersey : the biggest state in the history of the year has seen a population of around 40 % of the population , reports ap . montana : new hampshire and missouri : a state department of emergency has been declared a state of emergency . click for the full list , or check out a list of the states that voted tonight . MGSum-ext: gop eyes gains as voters in 11 states pick governors enlarge this image toggle caption jim cole/ap jim cole/ap voters in 11 states will pick their governors tonight , and republicans appear on track to increase their numbers by at least one , with the potential to extend their hold to more than two-thirds of the nation ’ s top state offices . and that ’ s health care , says political scientist thad kousser , co-author of the power of american governors . ” republicans currently hold 29 governorships , democrats have 20 , and rhode island ’ s gov . lincoln chafee is an independent . eight of the gubernatorial seats up for grabs are now held by democrats ; three are in republican hands . polls and race analysts suggest that only three of tonight ’ s contests are considered competitive , all in states where incumbent democratic governors aren ’ t running again : montana , new hampshire and washington . MGSum-abs: – voters in 11 states will pick their governors tonight , and republicans appear on track to increase their numbers by at least one , with the potential to extend their hold to more than two-thirds of the nation ’ s top state offices . republicans currently hold 29 governorships , democrats have 20 , and rhode island ’ s gov . lincoln chafee is an independent . the seat is expected to be won by former charlotte mayor walter dalton , who won his last election with 65 % of the vote , reports the washington post . democrats are expected to hold on to their seats in west virginia and missouri , and democrats are likely to hold seats in vermont and delaware , reports npr . polls and race analysts say that only three of tonight ’ s contests are considered competitive , and all in states where incumbent democratic governors aren ’ t running again . ” no matter who wins the presidency , national politics is going to be stalemated on the affordable care act , ” says one political scientist . Table 4: Sample summaries for a document cluster from the Multi-News test set. 
The underline shows the overlapping parts between our abstractive summary and the human summary. The extractive and abstractive summaries generated by MGSum have high overlap (different overlaps are marked in different colors).
The ROUGE-2 F1 score drops by 1.61 compared with MGSum and by 1.01 compared with the only-summary-generator variant after removing both the document representation and the sentence representation. Moreover, there is no extractive summarization to co-promote the recognition of important information for abstractive summarization after the sentence representation is removed. This indicates that the semantic information of sentence granularity is of great importance for encoding multiple documents.
4.7 Case Study
In Table 4, we present example summaries generated by the strong baseline HF and by our extractive and abstractive methods. The output of our model has the highest overlap with the ground truth. Moreover, our extractive and abstractive summaries show consistent behavior with high overlap, which further indicates that the two methods can jointly promote the recognition of important information. Compared with the extracted summary, the generated summary is more concise and coherent.
5 Conclusion and Future Work
In this work, we propose a novel multi-granularity interaction network to encode semantic representations for documents, sentences, and words. It unifies extractive and abstractive summarization by utilizing the word representations to generate the abstractive summary and the sentence representations to extract sentences. Experimental results show that the proposed method significantly outperforms all strong baseline methods and achieves the best result on the Multi-News dataset. In the future, we will introduce more tasks, such as document ranking, to supervise the learning of the multi-granularity representations for further improvement.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61772036), the Tencent AI Lab Rhino-Bird Focused Research Program (No. JR201953), and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author.
References
Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. CoRR, abs/1607.06450.
Ziqiang Cao, Furu Wei, Li Dong, Sujian Li, and Ming Zhou. 2015. Ranking with recursive neural networks and its application to multi-document summarization. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA, pages 2153–2159. AAAI Press.
Jaime G. Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In SIGIR ’98: Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 24-28 1998, Melbourne, Australia, pages 335–336. ACM.
Asli Çelikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1662–1675. Association for Computational Linguistics. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 675–686. Association for Computational Linguistics. Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2013. Towards coherent multidocument summarization. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, June 9-14, 2013, Westin Peachtree Plaza Hotel, Atlanta, Georgia, USA, pages 1163–1173. The Association for Computational Linguistics. Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 16, 2018, Volume 2 (Short Papers), pages 615–621. Association for Computational Linguistics. Sergey Edunov, Alexei Baevski, and Michael Auli. 2019. Pre-trained language model representations for language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4052–4059. Association for Computational Linguistics. G¨unes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. J. Artif. Intell. Res., 22:457–479. Alexander Richard Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R. Radev. 2019. Multi-news: A 6253 large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1074–1084. Association for Computational Linguistics. Sebastian Gehrmann, Yuntian Deng, and Alexander M. Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4098–4109. Association for Computational Linguistics. Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR. Logan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. Adapting the neural encoder-decoder framework from single to multi-document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4131–4141. 
Association for Computational Linguistics. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Chin-Yew Lin and Eduard H. Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLTNAACL 2003, Edmonton, Canada, May 27 - June 1, 2003. The Association for Computational Linguistics. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Yang Liu and Mirella Lapata. 2019. Hierarchical transformers for multi-document summarization. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5070–5081. Association for Computational Linguistics. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing , EMNLP 2004, A meeting of SIGDAT, a Special Interest Group of the ACL, held in conjunction with ACL 2004, 25-26 July 2004, Barcelona, Spain, pages 404–411. ACL. Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 August 4, Volume 1: Long Papers, pages 1073–1083. Association for Computational Linguistics. Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958. Sandeep Subramanian, Raymond Li, Jonathan Pilault, and Christopher J. Pal. 2019. On extractive and abstractive neural document summarization with transformer language models. CoRR, abs/1909.03186. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998–6008. Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir R. Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), Vancouver, Canada, August 34, 2017, pages 452–462. Association for Computational Linguistics. Jianmin Zhang, Jiwei Tan, and Xiaojun Wan. 2018. Towards a neural network approach to abstractive multi-document summarization. CoRR, abs/1804.09010. Xingxing Zhang, Furu Wei, and Ming Zhou. 2019. HIBERT: document level pre-training of hierarchical bidirectional transformers for document summarization. 
In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 5059–5069. Association for Computational Linguistics. 6254 Guangxiang Zhao, Junyang Lin, Zhiyuan Zhang, Xuancheng Ren, Qi Su, and Xu Sun. 2019. Explicit sparse transformer: Concentrated attention through explicit selection. CoRR, abs/1912.11637. Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. 2018. Neural document summarization by jointly learning to score and select sentences. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 1520, 2018, Volume 1: Long Papers, pages 654–663. Association for Computational Linguistics.
2020
556
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6255–6261 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6255 Tetra-Tagging: Word-Synchronous Parsing with Linear-Time Inference Nikita Kitaev and Dan Klein Computer Science Division University of California, Berkeley {kitaev, klein}@cs.berkeley.edu Abstract We present a constituency parsing algorithm that, like a supertagger, works by assigning labels to each word in a sentence. In order to maximally leverage current neural architectures, the model scores each word’s tags in parallel, with minimal task-specific structure. After scoring, a left-to-right reconciliation phase extracts a tree in (empirically) linear time. Our parser achieves 95.4 F1 on the WSJ test set while also achieving substantial speedups compared to current state-of-the-art parsers with comparable accuracies. 1 Introduction Recent progress in NLP, and practical machine learning applications more generally, has been driven in large part by increasing availability of compute. These advances are made possible by an ecosystem of specialized hardware accelerators such as GPUs and TPUs, highly tuned kernels for executing particular operations, and the ability to amortize computational costs across tasks through approaches such as pre-training and multitask learning. This places particular demands for a model to be efficient: it must parallelize, it must maximally use standard subcomponents that have been heavily optimized, but at the same time it must adequately incorporate task-specific insights and inductive biases. Against this backdrop, constituency parsing stands as a task where custom architectures are prevalent and parallel execution is limited. Stateof-the-art approaches use custom architecture components, such as the tree-structured networks of RNNG (Dyer et al., 2016) or the per-span MLPs in chart parsers (Stern et al., 2017; Kitaev et al., 2019). Approaches to inference range from autoregressive generation, to cubic-time CKY, to A* search – none of which are readily parallelizable. Our goal is to demonstrate a parsing algorithm that makes effective use of the latest hardware. The desiderata for our approach are (a) to maximize parallelism, (b) to minimize task-specific architecture design, and (c) to lose as little accuracy as possible compared to a state-of-the-art highly-specialized model. To do this, we propose an algorithm that reduces parsing to tagging, where all tags are predicted in parallel using a standard model architecture such as BERT (Devlin et al., 2019). Tagging is followed by a minimal inference procedure that is fast enough to schedule on the CPU because it runs in linear time with low constant factors (subject to mild assumptions). 2 Related Work Label-based parsing A variety of approaches have been proposed to mostly or entirely reduce parsing to a sequence labeling task. One family of these approaches is supertagging (Bangalore and Joshi, 1999), which is particularly common for CCG parsing. CCG imposes constraints on which supertags may form a valid derivation, necessitating complex search procedures for finding a highscoring sequence of supertags that is self-consistent. An example of how such a search procedure can be implemented is the system of Lee et al. (2016), which uses A∗search. This search procedure is not easily parallelizable on GPU-like hardware, and has a worst-case serial running time that is exponential in the sentence length. 
Gómez-Rodríguez and Vilares (2018) propose a different approach that fully reduces parsing to sequence labeling, but the label set size is unbounded: it expands with tree depth and related properties of the input, rather than being fixed for any given language. There have been attempts to address this by adding redundant labels, where the model learns to switch between tagging schemes in an attempt to avoid the problem of unseen labels (Vilares et al., 2019), but that only increases the label inventory rather than restricting it to a finite set. Our approach, on the other hand, uses just 4 labels in its simplest formulation (hence the name tetra-tagging).
Shift-reduce transition systems A number of parsers proposed in the literature can be categorized as shift-reduce parsers (Henderson, 2003; Sagae and Lavie, 2005; Zhang and Clark, 2009; Zhu et al., 2013). These systems rely on generating sequences of actions, which need not be evenly distributed throughout the sentence. For example, the construction of a deep right-branching tree might involve a series of shift actions (one per word in the sentence), followed by equally many consecutive reduce actions that all cluster at the end of the sentence. Due to the uneven alignment between actions and locations in a sentence, neural network architectures in recent shift-reduce systems (Vinyals et al., 2015; Dyer et al., 2016; Liu and Zhang, 2017) generally follow an encoder-decoder approach with autoregressive generation rather than directly assigning labels to positions in the input. Our proposed parser is also transition-based, but there are guaranteed to be exactly two decisions to make between one word and the next. This fixed alignment allows us to predict all actions in parallel rather than autoregressively.
Chart parsing Chart parsers fundamentally operate over span-aligned rather than word-aligned representations. For instance, the size of the chart in the CKY algorithm (Cocke, 1970; Kasami, 1966; Younger, 1967) is quadratic in the length of the sentence, and the algorithm itself has cubic running time. This is true for both classical methods and more recent neural approaches (Durrett and Klein, 2015; Stern et al., 2017). The construction of a chart involves a non-trivial (quadratic) computation that is specialized to parsing, and implementing the CKY algorithm on a hardware accelerator is a non-trivial and hardware-specific task.
Left-corner parsing To achieve all of our desiderata, we combine aspects of the previously mentioned approaches with ideas drawn from a long line of work on left-corner parsing (Rosenkrantz and Lewis, 1970; Nijholt, 1979; van Schijndel et al., 2013; Noji et al., 2016; Shain et al., 2016, inter alia). Much of past work highlights the benefits of a left-corner formulation for memory efficiency, with implications for psycholinguistic plausibility of the approach. We, on the other hand, demonstrate how to leverage these same considerations to achieve parallel tagging and linear time complexity of the subsequent inference procedure. Further, past work has used grammars (Rosenkrantz and Lewis, 1970) or transformed labeled trees (Johnson, 1998; Schuler et al., 2010). On the other hand, it is precisely the lack of an explicit grammar that allows us to formulate our linear-time inference algorithm.
Figure 1: An example tree over the terminals A, B, C, D, E with nonterminal nodes 1–4 and the corresponding tag sequence → ⇒ → ⇐ → ⇐ ← ⇒ ←. The nonterminal nodes have been numbered based on an in-order traversal.
3 Method To introduce our method, we first restrict ourselves to only consider unlabeled full binary trees (where every node has either 0 or 2 children). We defer the discussion of labeling and non-binary structure to Section 3.5. 3.1 Trees to tags Consider the example tree shown in Figure 1. The tree is fully binarized and consists of 5 terminal symbols (A,B,C,D,E) and 4 nonterminal nodes (1,2,3,4). For any full binary parse tree, the number of nonterminals will always be one less than the number of words, so we can construct a one-toone mapping between nonterminals and fenceposts (i.e. positions between words): each fencepost is matched with the shortest span that crosses it. For each node, we calculate the direction of its parent, i.e. whether the node is a left-child or a right-child. Although the root node in the tree does not have a parent, by convention we treat it as though it were a left-child (in Figure 1, this is denoted by the dummy parent labeled $). 6257 Our scheme associates each word and fencepost in the sentence with one of four labels: • “ → ”: This terminal node is a left-child. • “ ← ”: This terminal node is a right-child. • “ ⇒ ”: The shortest span crossing this fencepost is a left-child. • “ ⇐ ”: The shortest span crossing this fencepost is a right-child. We refer to our method as tetra-tagging because it uses only these four labels to represent binary bracketing structure. 3.2 Model Given a sentence with n words, there are altogether 2n −1 decisions (each with two options). By the construction above, it is evident that every tree has one (and only one) corresponding label representation. To reduce parsing to tagging, we simply use a neural network to predict which tag to select for each of the 2n −1 decisions required. Our implementation predicts these tag sequences from pre-trained BERT word representations. Two independent projection matrices are applied to the feature vector for the last sub-word unit within each word: one projection produces scores for actions corresponding to that word, and the other for actions at the following fencepost. A softmax loss is applied, and the model is trained to maximize the likelihood of the correct action sequence. 3.3 Tags to trees: transition system To map from label sequences back to trees, we reinterpret the four labels (“ → ”, “ ← ”, “ ⇒ ”, “ ⇐ ”) as actions in a left-corner transition system. The transition system maintains a stack of partiallyconstructed trees, where each element of the stack is one of the following: (a) a terminal symbol, i.e. a word; (b) a complete tree; or (c) a tree with a single empty slot, denoted by the special element ∅. An empty slot must be the rightmost leaf node in its tree, but may occur at any depth. The tree operations used are: (a) MAKENODE(left-child, right-child), which creates a new tree node; and (b) COMBINE(parent-tree, childtree), which replaces the empty slot ∅in the parent tree with the child tree. Decoding uses Algorithm 1; an example derivation is shown in Figure 2. 
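For concreteness, the forward direction of Section 3.1 (reading tags off a tree) can be sketched in a few lines of Python before the decoding algorithm below. This is an illustrative reconstruction, not the authors' code: it assumes a binarized tree represented as nested (left, right) tuples with words as strings, and it treats the root as a left-child, as in the text.

```python
def tetra_tags(tree, is_left_child=True, tags=None):
    """Emit one tag per node during an in-order traversal of a full binary tree.
    Terminals receive "→"/"←" and nonterminals "⇒"/"⇐", depending on whether
    the node is a left-child or a right-child; the root counts as a left-child."""
    if tags is None:
        tags = []
    if isinstance(tree, str):                        # terminal node (a word)
        tags.append("→" if is_left_child else "←")
    else:                                            # nonterminal: (left, right)
        left, right = tree
        tetra_tags(left, True, tags)                 # left subtree first
        tags.append("⇒" if is_left_child else "⇐")   # then the node itself
        tetra_tags(right, False, tags)               # then the right subtree
    return tags

# The tree from Figure 1, with terminals A-E:
example = (("A", ("B", ("C", "D"))), "E")
print(" ".join(tetra_tags(example)))                 # → ⇒ → ⇐ → ⇐ ← ⇒ ←
```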
Algorithm 1 Decoding algorithm
Input: A list of words (words) and a corresponding list of tetra-tags (actions)
Output: A parse tree
1: stack ← []
2: buffer ← words
3: for action in actions do
4:   switch action do
5:     case “→”
6:       leaf ← POP-FIRST(buffer)
7:       stack ← PUSH-LAST(stack, leaf)
8:     end case
9:     case “←”
10:      leaf ← POP-FIRST(buffer)
11:      stack[−1] ← COMBINE(stack[−1], leaf)
12:    end case
13:    case “⇒”
14:      stack[−1] ← MAKE-NODE(stack[−1], ∅)
15:    end case
16:    case “⇐”
17:      tree ← POP-LAST(stack)
18:      tree ← MAKE-NODE(tree, ∅)
19:      stack[−1] ← COMBINE(stack[−1], tree)
20:    end case
21:  end switch
22: end for
▷ The stack should only have one element
23: return stack[0]

Each action in the transition system is responsible for adding a single tree node onto the stack: the actions “→” and “←” do this by shifting in a leaf node, while the actions “⇒” and “⇐” construct a new non-terminal node. The transition system maintains the invariant that the topmost stack element is a complete tree if and only if a leaf node was just shifted (i.e., the last action was either “→” or “←”), and all other stack elements have a single empty slot. The actions “←” and “⇐” both make use of the COMBINE operation to fill an empty slot on the stack with a newly-introduced node, which makes the new node a right-child. New nodes from the actions “→” and “⇒”, on the other hand, are introduced directly onto the stack and can become left-children via a later MAKE-NODE operation. As a result, the behavior of the four actions (“→”, “←”, “⇒”, “⇐”) matches the label definitions from the previous section.
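Algorithm 1 can be transcribed almost line for line into Python. The sketch below is our own reconstruction for illustration; the Node class and the way COMBINE locates the empty slot (by walking down right children, which is where MAKE-NODE always places it) are implementation assumptions rather than the authors' released code.

```python
EMPTY = None  # stands in for the paper's empty-slot symbol ∅

class Node:
    """A binary tree node; `right` may temporarily hold the empty slot."""
    def __init__(self, left, right):
        self.left, self.right = left, right

def make_node(left, right):
    # MAKE-NODE(left-child, right-child)
    return Node(left, right)

def combine(parent, child):
    # COMBINE(parent-tree, child-tree): the empty slot is always the rightmost
    # leaf of `parent`, so it can be reached by descending right children.
    node = parent
    while isinstance(node.right, Node):
        node = node.right
    assert node.right is EMPTY
    node.right = child
    return parent

def decode(words, actions):
    """Rebuild a parse tree from a tetra-tag sequence (Algorithm 1)."""
    stack, buffer = [], list(words)
    for action in actions:
        if action == "→":            # shift a leaf that acts as a left-child
            stack.append(buffer.pop(0))
        elif action == "←":          # shift a leaf into the topmost empty slot
            stack[-1] = combine(stack[-1], buffer.pop(0))
        elif action == "⇒":          # new node above the top stack element
            stack[-1] = make_node(stack[-1], EMPTY)
        elif action == "⇐":          # new node filling the empty slot below
            tree = make_node(stack.pop(), EMPTY)
            stack[-1] = combine(stack[-1], tree)
    assert len(stack) == 1           # the stack should only have one element
    return stack[0]

def brackets(t):
    """Render the tree as a bracketing, for inspection."""
    return t if isinstance(t, str) else "(%s %s)" % (brackets(t.left), brackets(t.right))

# The running example of Figures 1 and 2:
tree = decode(list("ABCDE"), list("→⇒→⇐→⇐←⇒←"))
print(brackets(tree))                # ((A (B (C D))) E)
```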
The time complexity of this dynamic program depends on the number of actions (which is 2n −1, where n is the length of the sentence), as well as the maximum possible depth of the stack (d). A leftcorner transition system has the property that stack depth tends to be small for parse trees of natural language (Abney and Johnson, 1991; Schuler et al., 2010). In practice, the largest stack depth observed at any point in the derivation for any tree in the Penn Treebank is 8. By comparison, the median sentence length in the data is 23, and the longest sentence contains over 100 words. As a result, we can cap the maximum stack depth allowed in our inference procedure to d = 8, which means that the O(nd2) time complexity of inference is effectively O(n). In other words, our inference procedure will, in practice, take linear time in the length of the sentence. Action Stack Buffer (0) empty A B C D E (1) → $ A B C D E (2) ⇒ 1 ∅ $ A B C D E (3) → 1 ∅ $ A $ B C D E (4) ⇐ 1 2 ∅ $ A B C D E (5) → 1 2 ∅ $ A B $ C D E (6) ⇐ 1 2 3 $ A B C ∅ D E (7) ← 1 2 3 $ A B C D E (8) ⇒ 1 2 3 4 $ A B C D ∅ E (9) ← 1 2 3 4 $ A B C D E empty Figure 2: An example derivation under our transition system. S 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 · · · · · · · · · · · · G 2 3 4 → ⇒ ⇒ ⇒ ⇒ ⇐ ⇐ ⇐ ← ← ← ← → → → ⇒ ⇒ ⇒ ⇒ ⇐ ⇐ ⇐ ← ← ← ← → → → Figure 3: Paths in this grid correspond to sequences of tags, where paths starting at S and arriving at G are valid trees. Numbers represent the number of elements on the stack. 6259 Sents/s Hardware F1 Vilares et al. (2019) 942 1x GPU 91.13 Kitaev et al. (2019)∗ 39 1x GPU 95.59 Zhou and Zhao (2019)∗ – – 95.84 This work∗ 1200 1x TPU v3-8 95.44 Table 1: Comparison of F1 scores and inference speeds on the WSJ test set. ∗Models using BERTLARGE (Devlin et al., 2019) word representations fine-tuned from the same initial parameters. 1 2 3 4 5 6 7 8 Stack limit (number of elements) 0 20 40 60 80 100 Coverage (% of trees representable) F1 Figure 4: With a modest maximum stack size, the tetratagging transition system has near-complete coverage of the development data. Our parser’s F1 score closely tracks the fraction of gold trees that can be represented. 3.5 Handling of labels and non-binary trees Each of our four actions creates a single node in the binary tree. Labeling a node can therefore be incorporated into the corresponding action; for example, the action “ ⇒ S” will construct an S node that is a left-child in the tree. We do not impose any constraints on valid label configurations, so our inference procedure remains virtually unchanged. To handle non-binary trees, we first collapse all unary chains by introducing additional labels. For example, a clause that consists only of a verb phrase would be assigned the label S-VP. We then ensure that each non-terminal node has exactly two children by applying fully right-branching binarization, where a dummy label is introduced and assigned to nodes generated as a result of binarization. During inference, a post-processing step undoes these transformations. 4 Results Our proposed parser is designed to rank syntactic decisions entirely in parallel, with inference reduced to a minimal linear-time algorithm. Its neural architecture consists almost entirely of BERT layers, with the only additions being two trainable projection matrices. To verify our approach, we train our parser on the Penn Treebank (Marcus et al., 1993) and evaluate its efficiency and accuracy when running on Cloud TPU v3 hardware. In Table 1, we compare with two classes of recent work. 
The parser by Vilares et al. (2019) is one of the fastest reported in the recent literature, but it trails the state-of-the-art model by more than 4 F1 points. In contrast, models by Zhou and Zhao (2019) and Kitaev et al. (2019) achieve the highestreported numbers when fine-tuning from the same initial BERTLARGE checkpoint that we use to train our tetra-tagger. However, these latter models are slower than our tetra-tagging approach and feature inference algorithms with high polynomial complexity that are difficult to adapt to accelerators such as the TPU. Our approach is able to achieve both high throughput and high F1, with only small losses in accuracy compared to the best BERT-based approaches. In Figure 4, we plot the parser’s accuracy across different settings for the maximum stack depth. The F1 score rapidly asymptotes as the stack size limit is increased, which validates our claim that inference can run in linear time. 5 Conclusion We present a reduction from constituency parsing to a tagging task with two binary structural decisions and two labeling decisions per word. Remarkably, probabilities for these tags can be estimated fully in parallel by a simple classification layer on top of a neural network architecture such as BERT. We hope that this formulation can be useful as a simple and low-overhead way of integrating syntax into any neural NLP model, including for multi-task training and to predict syntactic annotations during inference. By reducing the task-specific architecture components to a minimum, our method can be rapidly adapted as new modeling techniques, efficiency optimizations, and hardware accelerators become available. Code for our approach is available at github.com/nikitakit/tetra-tagging. Acknowledgments This research was supported by DARPA through the XAI program and by the National Science Foundation under Grant No. 1618460. We would like to thank the Google Cloud TPU team for their hardware support. We are also grateful to the members of the Berkeley NLP group and the anonymous reviewers for their helpful feedback. 6260 References Steven P. Abney and Mark Johnson. 1991. Memory requirements and local ambiguities of parsing strategies. Journal of Psycholinguistic Research, 20(3):233–250. Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25(2):237–265. John Cocke. 1970. Programming languages and their compilers: Preliminary notes. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Greg Durrett and Dan Klein. 2015. Neural CRF parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 302–312, Beijing, China. Association for Computational Linguistics. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209, San Diego, California. Association for Computational Linguistics. 
Carlos G´omez-Rodr´ıguez and David Vilares. 2018. Constituent parsing as sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1314– 1324, Brussels, Belgium. Association for Computational Linguistics. James Henderson. 2003. Inducing history representations for broad coverage statistical parsing. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 103– 110. Mark Johnson. 1998. Finite-state approximation of constraint-based grammars using left-corner grammar transforms. In COLING 1998 Volume 1: The 17th International Conference on Computational Linguistics. Tadao Kasami. 1966. An efficient recognition and syntax-analysis algorithm for context-free languages. Coordinated Science Laboratory Report no. R-257. Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3499–3505, Florence, Italy. Association for Computational Linguistics. Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2016. Global neural CCG parsing with optimality guarantees. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2366–2376, Austin, Texas. Association for Computational Linguistics. Jiangming Liu and Yue Zhang. 2017. In-order transition-based constituent parsing. Transactions of the Association for Computational Linguistics, 5:413–424. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Anton Nijholt. 1979. Structure preserving transformations on non-left-recursive grammars. In International Colloquium on Automata, Languages, and Programming, pages 446–459. Springer. Hiroshi Noji, Yusuke Miyao, and Mark Johnson. 2016. Using left-corner parsing to encode universal structural constraints in grammar induction. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 33–43, Austin, Texas. Association for Computational Linguistics. Daniel J. Rosenkrantz and Philip M. Lewis. 1970. Deterministic left corner parsing. In Switching and Automata Theory, 1970., IEEE Conference Record of 11th Annual Symposium On, pages 139–152. IEEE. Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 125–132, Vancouver, British Columbia. Association for Computational Linguistics. William Schuler, Samir AbdelRahman, Tim Miller, and Lane Schwartz. 2010. Broad-coverage parsing using human-like memory constraints. Computational Linguistics, 36(1):1–30. Cory Shain, William Bryce, Lifeng Jin, Victoria Krakovna, Finale Doshi-Velez, Timothy Miller, William Schuler, and Lane Schwartz. 2016. Memory-bounded left-corner unsupervised grammar induction on child-directed input. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 964–975, Osaka, Japan. The COLING 2016 Organizing Committee. 6261 Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 818–827, Vancouver, Canada. 
Association for Computational Linguistics. Marten van Schijndel, Andy Exley, and William Schuler. 2013. A model of language processing as hierarchic sequential prediction. Topics in Cognitive Science, 5(3):522–540. David Vilares, Mostafa Abdou, and Anders Søgaard. 2019. Better, faster, stronger sequence tagging constituent parsers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3372–3383, Minneapolis, Minnesota. Association for Computational Linguistics. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a Foreign Language. In Advances in Neural Information Processing Systems 28, pages 2755– 2763. Curran Associates, Inc. Daniel H. Younger. 1967. Recognition and parsing of context-free languages in time n3. Information and control, 10(2):189–208. Yue Zhang and Stephen Clark. 2009. Transition-based parsing of the Chinese treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT’09), pages 162–171, Paris, France. Association for Computational Linguistics. Junru Zhou and Hai Zhao. 2019. Head-driven phrase structure grammar parsing on Penn treebank. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2396–2408, Florence, Italy. Association for Computational Linguistics. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shiftreduce constituent parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 434–443, Sofia, Bulgaria. Association for Computational Linguistics.
2020
557
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6262–6267 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6262 Are we Estimating or Guesstimating Translation Quality? Shuo Sun∗ Johns Hopkins University [email protected] Francisco Guzm´an Facebook AI [email protected] Lucia Specia Imperial College London [email protected] Abstract Recent advances in pre-trained multilingual language models lead to state-of-the-art results on the task of quality estimation (QE) for machine translation. A carefully engineered ensemble of such models won the QE shared task at WMT19. Our in-depth analysis, however, shows that the success of using pre-trained language models for QE is overestimated due to three issues we observed in current QE datasets: (i) The distributions of quality scores are imbalanced and skewed towards good quality scores; (ii) QE models can perform well on these datasets while looking at only source or translated sentences; (iii) They contain statistical artifacts that correlate well with human-annotated QE labels. Our findings suggest that although QE models might capture fluency of translated sentences and complexity of source sentences, they cannot model adequacy of translations effectively. 1 Introduction Quality Estimation (QE) (Blatz et al., 2004; Specia et al., 2009) for machine translation is an important task that has been gaining interest over the years. Formally, given a source sentence, s and a translated sentence, t = φ(s) where φ is a machine translation system, the goal of QE is to learn a function f such that f(s, t) returns a score that represents the quality of t, without the need to rely on reference translations. QE has many useful applications: QE systems trained to estimate Human-mediated Translation Error Rate (HTER) (Snover et al., 2006) can automatically identify and filter bad translations, thereby reducing costs and human post-editing efforts. Industry players use QE systems to evaluate translation systems deployed in real-world applications. Finally, QE can also be used as a feed∗Work done when Shuo Sun was an intern at Facebook. back mechanism for end-users who cannot read the source language. Recently, language models pre-trained on large amounts of text documents lead to significant improvements on many natural language processing tasks. For instance, an ensemble of multilingual BERT (Devlin et al., 2019) and XLM (Conneau and Lample, 2019) models (Kepler et al., 2019a) won the QE shared task at the Workshop on Statistical Machine Translation (WMT19) (Fonseca et al., 2019), outperforming the baseline neural QE system (Kepler et al., 2019b) by 42.9% and 127.7% on the English-German and EnglishRussian sentence-level QE tasks respectively. While pre-trained language models contribute to tremendous improvements on publicly available benchmark datasets, such increases in performance beg the question: Are we really learning to estimate translation quality? Or are we just guessing the quality of the test sets? We performed a careful analysis which reveals that the latter is happening, given several issues with QE datasets which undermine the apparent success on this task: (i) The distributions of quality scores in the datasets are imbalanced and skewed towards highquality translations. (ii) The datasets suffer from the partial-input baseline problem (Poliak et al., 2018; Feng et al., 2019) where QE systems can still perform well while ingesting only source or translated sentences. 
(iii) The datasets contain domain-specific lexical artifacts that correlate well with human judgment scores.
Our results show that although QE systems trained on these datasets can capture fluency of the target sentences and complexity of the source sentences, they over-leverage lexical artifacts instead of modeling adequacy. From these findings, we conclude that QE models cannot generalize, and the successes in this task are over-estimated.
2 Methodology
In this paper, we analyze three different instances of sample bias that are prevalent in QE datasets, which affect the generalization that models trained on them can achieve.
Lack of label diversity With the advent of NMT models, we have seen an increase in the quality of translation systems. As a result, a random sample of translations might have few examples with low-quality scores. Systems trained on imbalanced datasets and tested on similar distributions can get away with low error rates without paying much attention to samples with bad quality scores. To detect these issues, we analyze the labels and predicted score distributions for several models.
Lack of representative samples We want to have datasets that adequately represent both the fluency and adequacy aspects of translation. QE datasets should have a mixture of instances that model both high and low adequacy, irrespective of the fluency. To evaluate whether our models learn both aspects of translation quality, we run partial-input experiments, where we train systems with only the source or target sentences and analyze the discrepancies w.r.t. the full-input experiments.
Lack of lexical diversity Most QE datasets come from a single domain (e.g., IT, life sciences), and certain lexical items can be associated with high-quality translations. Lexical artifacts are also observed in monolingual datasets across different tasks (Goyal et al., 2017; Jia and Liang, 2017; Kaushik and Lipton, 2018). For example, Gururangan et al. (2018) find that annotators are responsible for introducing lexical artifacts into some natural language inference datasets because they adopt heuristics to quickly generate plausible hypotheses during annotation. Here, we use Normalized Pointwise Mutual Information (NPMI) (Bouma, 2009) to find possible lexical artifacts associated with different levels of HTER.
2.1 Experimental Setup
We experiment with recent QE datasets from WMT18 and WMT19. For every dataset, a Statistical Machine Translation (SMT) system or a Neural Machine Translation (NMT) system was used to translate the source sentences. The translated sentences were then post-edited by professional translators. HTER scores between translated sentences and post-edited sentences were calculated with the TER tool [1] and clipped to the range [0, 1]. An HTER score of 0 means the translated sentence is perfect, while 1 means the translated sentence requires complete post-editing. Since the test sets for WMT18 are not publicly available, we randomly shuffled those datasets into train, dev, and test splits, following a ratio of approximately 8 to 1 to 1. Table 1 presents statistics of the QE datasets.
Dataset / langs / domain / system / train / dev / test (sizes in thousands)
WMT18*: en-de, IT, SMT, 21.8, 2.7, 2.7
WMT18*: en-de, IT, NMT, 11.5, 1.4, 1.4
WMT18*: en-cs, IT, SMT, 33.0, 4.1, 4.1
WMT18*: en-lv, SCI, SMT, 9.8, 1.2, 1.2
WMT18*: en-lv, SCI, NMT, 11.1, 1.3, 1.3
WMT18*: de-en, SCI, SMT, 21.6, 2.7, 2.7
WMT19: en-de, IT, NMT, 13.4, 1.0, 1.0
WMT19: en-ru, Tech, NMT, 15.0, 1.0, 1.0
Table 1: Statistics of QE datasets. WMT18* contains random splits of the publicly available training data since the official test sets are not publicly available.
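As a hedged illustration of the HTER label described above: the sketch below approximates HTER as the word-level edit distance between a machine translation and its post-edited version, divided by the length of the post-edit and clipped to [0, 1]. The real labels were produced with the TER tool, which additionally models block shifts, so this is only a simplified stand-in; the function names and whitespace tokenization are our own assumptions.

```python
def word_edit_distance(hyp, ref):
    """Levenshtein distance over word tokens (insertions, deletions, substitutions)."""
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, start=1):
        cur = [i]
        for j, r in enumerate(ref, start=1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (h != r)))   # substitution (0 if equal)
        prev = cur
    return prev[-1]

def approx_hter(mt_output, post_edit):
    """Simplified HTER: edits / post-edit length, clipped to [0, 1]."""
    hyp, ref = mt_output.split(), post_edit.split()
    if not ref:
        return 0.0
    return min(1.0, word_edit_distance(hyp, ref) / len(ref))

# A perfect translation gets 0; a heavily edited one approaches 1.
print(approx_hter("click the button to save", "click the button to save"))    # 0.0
print(approx_hter("click knob for store save", "click the button to save"))   # 0.6
```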
2.2 Models
BERT We experiment with a strong neural QE approach based on BERT (Devlin et al., 2019). In particular, we focus on the bert-base-cased version of the multilingual BERT [2]. We join the source and translated sentences together using the special SEP token and predict the QE score from the vector representation of the final CLS token via a Multilayer Perceptron (MLP) layer. Our models perform competitively with the state-of-the-art QE models (Kepler et al., 2019a; Kim et al., 2019). However, we do not treat this as a multi-task learning problem where word-level labels are also needed, because this is severely limited by the availability of data. We also do not apply further optimizations (e.g., model ensembling), given that our focus is on what can be learned with the current data, and not on maximizing performance. Our simpler models allow us to carefully analyze and determine the effects of source and translated sentences on the performance of the models. We expect the trends to be the same as for other neural QE models.
[1] http://www.umiacs.umd.edu/~snover/terp/
[2] https://github.com/google-research/bert
QUEST We also trained and evaluated SVM regression models over 17 baseline features highly relevant to the QE task (Specia et al., 2013, 2015).
3 Results and Recommendations
3.1 Imbalanced datasets
Figure 1 presents the distributions of HTER scores for QE datasets from WMT18 and WMT19.
Figure 1: Histograms of HTER scores.
The distributions of quality scores are skewed towards zero, i.e., most of the translated sentences require few or no post-editing. This phenomenon is especially true for the WMT19 datasets, which are exclusively NMT-based, and for which the majority of the translated sentences have HTER scores of less than 0.1. When we examine the estimations from our QE models, we find that they rarely output values above 0.3, which implies that these models fail to capture sentences with low-quality scores. For example, 15.8% of the samples from the WMT19 En-De test set have HTER scores above 0.3, yet a BERT QE model outputs scores above 0.3 for only 14.5% of those samples. In fact, our BERT model predicts scores above 0.3 for only 2.3% of the whole test set. This defeats the purpose of QE, especially when the objective of QE is to identify unsatisfactory translations.
Recommendation: To alleviate this issue, we recommend that QE datasets are balanced by design and that they include high-, medium-, and low-quality translations. One way to ensure this would be to include models with different levels of quality.
3.2 Lexical artifacts
Table 3 shows some examples of the domain-specific lexical artifacts we found in the en-de and en-cs datasets, although other datasets exhibit similar issues. Around 37% of translated sentences in the En-De datasets contain the double inverted comma, and more than 70% of these sentences require little to no post-editing. A QE system can get strong performance simply by associating any translated sentences containing double inverted commas with low HTER scores. These lexical artifacts are introduced when the lack of diversity in labels interacts with a lack of diversity in vocabulary and sentences. For example, the En-De dataset, which was sampled from an IT manual, contains many repetitive sentences similar to “Click X to go to Y”.
Recommendation: We can mitigate this problem by sampling source sentences from various documents across multiple domains.
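The NPMI ranking behind Table 3 (shown below) can be reproduced with a few lines of counting. The sketch treats "the sentence contains marker w" and "the sentence has HTER < 0.1" as two binary events and scores their association with NPMI = PMI / (-log joint probability), following Bouma (2009); the data layout and the toy sentences are our own assumptions for illustration, not the authors' code or data.

```python
import math
from collections import Counter

def npmi_markers(samples, hter_threshold=0.1):
    """Rank tokens by NPMI with the event 'sentence has HTER < threshold'.
    `samples` is an iterable of (translated_sentence, hter) pairs."""
    n, n_good = 0, 0
    tok_count = Counter()      # number of sentences containing each token
    joint_count = Counter()    # ... containing the token AND having HTER < threshold
    for sentence, hter in samples:
        n += 1
        good = hter < hter_threshold
        n_good += good
        for tok in set(sentence.split()):
            tok_count[tok] += 1
            joint_count[tok] += good
    scores = {}
    for tok, c in tok_count.items():
        p_joint = joint_count[tok] / n
        if p_joint == 0.0:
            continue                     # never co-occurs with low HTER
        if p_joint == 1.0:
            scores[tok] = 1.0            # degenerate case: perfect co-occurrence
            continue
        p_tok, p_good = c / n, n_good / n
        pmi = math.log(p_joint / (p_tok * p_good))
        scores[tok] = pmi / (-math.log(p_joint))   # normalization from Bouma (2009)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage with made-up sentences; "klicken" and "Sie" get the highest scores here.
data = [("klicken Sie auf OK", 0.02), ("Fehler beim Laden der Seite", 0.45),
        ("klicken Sie hier", 0.05), ("Seite nicht gefunden", 0.30)]
print(npmi_markers(data)[:3])
```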
Dataset / marker / prevalence (%) / H < 0.1 (%)
WMT18/19 en-de: ” 37.1 73.6
WMT18/19 en-de: > 7.1 88.8
WMT18/19 en-de: wählen 21.1 78.0
WMT18/19 en-de: klicken 13.2 82.8
WMT18 en-cs: gt 4.8 43.2
WMT18 en-cs: &amp; 4.8 43.0
WMT18 en-cs: go 5.8 22.9
WMT18 en-cs: www 0.8 43.9
Table 3: Top 4 lexical items ranked by NPMI for HTER in the range [0.0, 0.1) and the prevalence (%) of sentences containing these words and with HTER (H) score of less than 0.1.
3.3 Partial-input hypothesis
In principle, a QE system should predict the quality of a translation given: (i) its closeness to the source text, and (ii) how well it fits in the target language. Here, we present results from training and testing systems under partial-input conditions, where either the source or the translation is used to make predictions. In Table 2 we report the average Pearson correlation over five different training runs of the same model. We observe that QE systems trained on partial inputs perform as well as systems trained on the full input. This is especially true for the target-only systems that use BERT: they achieve 90% or more of the full-input performance on five out of eight test sets. Similarly, source-only QE systems consistently perform at a correlation of 0.4 or more. The partial-input problem is less pronounced for the feature-based SVM models, where such high performance happens in only one case.
Dataset / langs / syst / SVM + 17 features (ρ, src %, tgt %) / BERT (ρ, src %, tgt %)
WMT18*: de-en SMT | SVM 0.342, 62.3%, 57.6% | BERT 0.697, 62.0%, 81.2%
WMT18*: en-cs SMT | SVM 0.398, 57.3%, 79.9% | BERT 0.609, 88.2%, 96.1%
WMT18*: en-de NMT | SVM 0.290, 63.4%, 78.6% | BERT 0.456, 92.5%, 88.4%
WMT18*: en-de SMT | SVM 0.326, 113.2%, 100.0% | BERT 0.597, 71.2%, 100.3%
WMT18*: en-lv NMT | SVM 0.273, 52.4%, 60.8% | BERT 0.621, 68.8%, 77.3%
WMT18*: en-lv SMT | SVM 0.311, 38.6%, 51.5% | BERT 0.509, 82.5%, 93.9%
WMT19: en-de NMT | BERT 0.423, 94.6%, 90.5%
WMT19: en-ru NMT | BERT 0.439, 75.2%, 95.9%
Table 2: Pearson correlation (ρ) between predictions from various QE models and gold HTER labels, and the percentage of performance obtained by presenting the model with partial input from only the source (src) or target (tgt) sentences. In bold we highlight instances with higher than 85% performance. Results for QUEST with the WMT19 data are omitted as feature sets for those datasets are not publicly available.
The partial-input baseline problem was also reported by the top-performing QE system from WMT19 (Kepler et al., 2019a). There, the best results on the word-level QE task were obtained by ignoring the source sentences when making predictions on translated sentences, and vice versa. The strong performance on partial inputs shows that these datasets are cheatable, and QE systems trained on them would not generalize well (Feng et al., 2019).
Recommendation: When designing and annotating QE datasets, we suggest using a metric that intrinsically represents both fluency and adequacy as labels, such as direct assessments (Graham, 2015), and ensuring that we have enough representative instances with high and low adequacy and fluency.
Dataset / langs / syst. / ρtest / ρadv
WMT18*: en-de SMT 0.597 0.030
WMT18*: en-de NMT 0.456 -0.017
WMT18*: en-cs SMT 0.609 0.047
WMT18*: en-lv SMT 0.509 0.012
WMT18*: en-lv NMT 0.621 0.030
WMT18*: de-en SMT 0.697 0.014
WMT19: en-de NMT 0.423 0.002
WMT19: en-ru NMT 0.439 -0.036
Table 4: Pearson correlations on the original test sets (ρtest) and adversarial test sets (ρadv) for the BERT-based models.
4 Discussion
Our results suggest that source sentences or translated sentences alone might already contain cues that correlate well with human-annotated scores in the QE datasets. Given this, it seems highly unlikely that these QE models can capture inter-
dependencies between source and translated sentences, which usually requires several levels of linguistic analysis. We hypothesize that QE models rely on either the complexity of source sentences or the fluency of translated sentences, but not on adequacy, to make their predictions. To test this, we create adversarial test sets across all language directions by randomly shuffling all source sentences and changing the HTER scores to 1.0. A good model should be able to assign high HTER scores to mismatched pairs. In Table 4, we show the Pearson correlations on the adversarial sets. As expected, our QE models perform poorly, getting correlations close to zero. The results confirm our suspicion: systems trained on these datasets fail to model adequacy. They assign high scores to fluent translations or source sentences with low complexity, regardless of whether these translated sentences are semantically related to their corresponding source or translated sentences. 6266 5 Conclusions and future work In this work, we presented our analysis of QE datasets used in recent evaluation campaigns. Although recent advances in pre-trained multilingual language models significantly improve performances on these benchmark QE datasets, we highlight several instances of sampling bias embedded in the QE datasets which undermine the apparent successes of modern QE models. We identified (i) issues with the balance between highand low- quality instances (ii) issues with the lexical variety of the test sets and (iii) the lack of robustness to partial input. For each of these problems, we proposed recommendations. Upon the submission of this paper, we implemented the proposed recommendations by creating a new dataset for quality estimation that addresses the limitations in current datasets. We collected data for six language pairs, namely two high-resource languages (English–German and English–Chinese), two medium–resource languages (Romanian–English and Estonian–English), and two low-resource languages (Sinhala–English and Nepali–English). Each language pair contains 10,000 sentences extracted from Wikipedia and translated by stateof-the-art neural models, manually annotated for quality with direct assessment (0-100) by multiple annotators following industry standards for quality control. Figure 2: Histograms of DA scores in MLQE dataset for translations into/out of English (en) from/to Romanian (ro), Nepali (ne), Estonian (et), Sinhala (si), Chinese (zh) and German (de). Improving label diversity We selected language pairs with varying degrees of resource availability, which led to more diverse translation quality distributions (particularly for the mediumresource languages), mitigating the issue of imbalanced datasets, as shown in Figure 2. Improving lexical diversity We sampled sentences from a diverse set of topics from Wikipedia, which led to a more diverse vocabulary. Now, the average type-token ratio (TTR) for the English sentences in this set is 0.166, which is a 417% increase from the average TTR of the QE dataset from WMT18 and a 259% increase from the average TTR of the QE dataset from WMT19. Improving representatation This dataset is based on direct assessment, which balances between adequacy and fluency. Hopefully, this will mitigate the problems associated with partialinputs by having more instances with high fluency but low adequacy. In Figure 3, we show one of such examples. Figure 3: An English-Chinese sentence pair from the MLQE dataset. 
Improving representation  This dataset is based on direct assessment, which balances adequacy and fluency. We hope this will mitigate the problems associated with partial inputs by including more instances with high fluency but low adequacy. In Figure 3, we show one such example.

Figure 3: An English–Chinese sentence pair from the MLQE dataset.

The translation in Figure 3 is fluent but inadequate, since the final token is mistranslated to statue instead of figurehead, changing the original meaning. Our annotators collectively assigned it a low score of 24%. However, HTER would misclassify it as a good translation, since only one token requires post-editing.

This dataset, named MLQE, has been released to the research community (https://github.com/facebookresearch/mlqe) and will be used for the WMT20 shared task on Quality Estimation (http://www.statmt.org/wmt20/quality-estimation-task.html). In future work, we will test the partial-input hypothesis on this data. We hope it will be useful for general research in QE towards more reliable models.

References

John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2004. Confidence estimation for machine translation. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 315–321.

Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. In Proceedings of GSCL (2009), pages 31–40.

Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems, pages 7057–7067.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.

Shi Feng, Eric Wallace, and Jordan Boyd-Graber. 2019. Misleading failures of partial-input baselines. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5533–5538.

Erick Fonseca, Lisa Yankovskaya, André F. T. Martins, Mark Fishel, and Christian Federmann. 2019. Findings of the WMT 2019 shared tasks on quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 1–10.

Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904–6913.

Yvette Graham. 2015. Improving evaluation of machine translation quality estimation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1804–1813, Beijing, China. Association for Computational Linguistics.

Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112.

Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2021–2031.

Divyansh Kaushik and Zachary C. Lipton. 2018. How much reading does reading comprehension require? A critical investigation of popular benchmarks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5010–5015.
Fabio Kepler, Jonay Trénous, Marcos Treviso, Miguel Vera, António Góis, M. Amin Farajian, António V. Lopes, and André F. T. Martins. 2019a. Unbabel's participation in the WMT19 translation quality estimation shared task. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 78–84.

Fabio Kepler, Jonay Trénous, Marcos Treviso, Miguel Vera, and André F. T. Martins. 2019b. OpenKiwi: An open source framework for quality estimation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 117–122.

Hyun Kim, Joon-Ho Lim, Hyun-Ki Kim, and Seung-Hoon Na. 2019. QE BERT: Bilingual BERT using multi-task learning for neural quality estimation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 85–89.

Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180–191.

Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas.

Lucia Specia, Gustavo Paetzold, and Carolina Scarton. 2015. Multi-level translation quality prediction with QuEst++. In ACL-IJCNLP 2015 System Demonstrations, pages 115–120, Beijing, China.

Lucia Specia, Kashif Shah, José G. C. De Souza, and Trevor Cohn. 2013. QuEst – a translation quality estimation framework. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 79–84.

Lucia Specia, Marco Turchi, Nicola Cancedda, Marc Dymetman, and Nello Cristianini. 2009. Estimating the sentence-level quality of machine translation systems. In 13th Conference of the European Association for Machine Translation, pages 28–37.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6268–6281 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6268 Language (Re)modelling: Towards Embodied Language Understanding Ronen Tamari† Chen Shani† Tom Hope⋆∗ Miriam R. L. Petruck‡ Omri Abend† Dafna Shahaf† †The Hebrew University of Jerusalem ⋆Allen Institute for Artificial Intelligence ∗Paul G. Allen School of Computer Science & Engineering, University of Washington ‡International Computer Science Institute, Berkeley, CA {ronent,chenxshani,oabend,dshahaf}@cs.huji.ac.il [email protected] [email protected] Abstract While natural language understanding (NLU) is advancing rapidly, today’s technology differs from human-like language understanding in fundamental ways, notably in its inferior efficiency, interpretability, and generalization. This work proposes an approach to representation and learning based on the tenets of embodied cognitive linguistics (ECL). According to ECL, natural language is inherently executable (like programming languages), driven by mental simulation and metaphoric mappings over hierarchical compositions of structures and schemata learned through embodied interaction. This position paper argues that the use of grounding by metaphoric inference and simulation will greatly benefit NLU systems, and proposes a system architecture along with a roadmap towards realizing this vision. 1 Introduction “Not those speaking the same language, but those sharing the same feeling understand each other.” – Jalal ad-Din Rumi While current NLU systems “speak” human language by learning strong statistical models, they do not possess anything like the rich mental representations that people utilize for language understanding. Indeed, despite the tremendous progress in NLU, recent work shows that today’s state-ofthe-art (SOTA) systems differ from human-like language understanding in crucial ways, in particular in their generalization, grounding, reasoning, and explainability capabilities (Glockner et al., 2018; McCoy et al., 2019a,b; Nie et al., 2019; Yogatama et al., 2019; Lake et al., 2019). Question-answering (QA) is currently one of the predominant methods of training deep-learning models for general, open-domain language understanding (Gardner et al., 2019b). While QA is a versatile, broadly-applicable framework, recent studies have shown it to be fraught with pitfalls (Gardner et al., 2019a; Mudrakarta et al., 2018). A recent workshop on QA for reading comprehension suggested that “There is growing realization that the traditional supervised learning paradigm is broken [...] – we’re fitting artifacts” (Gardner, 2019). In many respects, the problems of NLU mirror those of artificial intelligence (AI) research in general. Lake et al.’s (2017a) seminal work identified a significant common factor at the root of problems in general AI. The current deep-learning paradigm is a statistical pattern-recognition approach predominantly applied to relatively narrow task-specific prediction. In contrast, human cognition supports a wide range of inferences (planning, action, explaining, etc.), hinting at a view of intelligence focused on model-building, specifically, mental models: rich, structured, manipulable, and explainable representations useful for performing in dynamic, uncertain environments. 
This distinction motivates the quest for a new cognitively-inspired modelbuilding learning paradigm for general AI, which has inspired fruitful subsequent research and discussion (e.g., Lake et al. (2017b)). The observation that NLU and general AI share a common central problem (task-specific predictionbased learning), and the growing realization that deeper text understanding requires building mental models (Gardner et al., 2019a; Forbes et al., 2019), motivate the search for an NLU analog of the cognitively-inspired model building paradigm. Amid recent position papers highlighting significant differences between human language understanding and current NLU systems (McClelland et al., 2019; Bisk et al., 2020), here we take a more focused look at mental models; challenges arising due to their embodied nature, their importance in general NLU, and how we might begin integrating them into current approaches. 6269 Mainstream NLU work, be it entirely distributional, such as BERT (Devlin et al., 2019), or also involving symbolic knowledge representation (Liu et al., 2019a; Bosselut et al., 2019), seldom addresses mental models directly. Crucially, such approaches lack the interactive worlds within which mental models1 are learned jointly through language and embodied action. The most closely related lines of work to the present proposal are grounded approaches, which feature worlds in the form of interactive environments, and address mapping text to programs (executable semantic parses) (e.g., Gauthier and Mordatch, 2016; Liang, 2016; Kiela et al., 2016; Chevalier-Boisvert et al., 2019). However, while well-aligned with a model-building paradigm, typically such approaches have been limited to short or synthetic literal language and narrow domains assuming predefined environments. Embodied approaches to general NLU, as advocated here, are few and far between. Mostly, examples fall under the construction grammar framework (Steels and de Beule, 2006; Bergen and Chang, 2005). However, despite their intellectual merit, they were not operationalized to scale readily for mainstream applications (see §3). This position paper argues that executable semantic parsing and grounded approaches to NLU constitute a first step in a much larger program, whose outline is set forth, for general language understanding through embodied cognitive linguistics (ECL). Following much cognitive science research (see §3, §4), this paper posits that (1) execution or simulation is a central part of semantics, essential for addressing some of the persistent difficulties in text understanding, and (2) metaphoric inference capabilities are central to knowledge representation, and facilitate grounded understanding of general language. Importantly, capacities for both simulation and metaphor are emergent, borne of embodied interaction within an external world. Our contributions are: we analyze inherent limitations of SOTA statistical language models applied to NLU and propose a framework to address these limitations. The novelty of this approach stems from bringing together ideas from the cognitive science literature, the wider AI community, and NLU. This framework constitutes a path to generalize current execution-based methods towards more general language understanding. 1Typically, mental models are construed as “world simulators”; see §3. The world contains 2 crates. Each crate contains 4 boxes. Oranges and apples are objects. Each box may contain up to 5 objects. Objects can be moved from one box to another. 
Objects can be removed from boxes or crates. There are two apples in the first box in the first crate. There is one orange and one apple in the second box of the second crate. First, the apples were transfered from the first box of the first crate to the first box of the second crate. Next, all apples were removed from the second crate. Initial World State C1 C2 Figure 1: Open-domain challenge – a world with boxes, crates and objects. This paper proposes a system architecture and a roadmap towards implementing the vision outlined here, suggesting preliminary directions for future work (learned world models, incorporating interaction into datasets). We believe that this framework will facilitate consolidation with multiple related lines of research across the different communities, particularly embodied AI and NLU (Luketina et al., 2019). 2 Challenges for Current NLU Systems This section presents concrete example problems demonstrating inherent limitations in SOTA NLU. 2.1 Open-domain Literal Language Simulation Fig. 1 includes a short story about a world with crates, boxes, and objects inside them. It is a short and simple narrative, far from capturing the fullblown complexity of natural language. Following Gardner et al. (2019a), we assume that a system understands the story if it can correctly answer arbitrary questions about it. To do so requires basic commonsense and mathematical reasoning, referent grounding, tracking events, handling declarative knowledge, and more. The task is similar to narrative comprehension tasks in datasets such as bAbI (Bordes et al., 2015) and SCONE (Long et al., 2016), and could be solved given large amounts of annotated training data. But, the goal here is different, specifically, to develop models that, like humans, can understand such language on-the-fly (like zero-shot learning). QA approaches. Current QA systems, used in an off-the-shelf manner, do not generalize well to tasks on which they have not been trained; NLU models are known to be brittle even to slight 6270 changes in style and vocabulary (Gardner et al., 2020; Keysers et al., 2020). The closest QA setting is the DROP challenge (Dua et al., 2019), requiring reading comprehension and basic numerical reasoning over paragraphs. As a simple sanity check, we tested a near-SOTA model and baseline2 on this example, asking questions about the initial and final state. The models were notably better answering questions about the initial state than about the final state. This result is perhaps expected, as the answers to questions about the initial state are closer to the input text. Answering questions about later states is more challenging. A key missing component of these systems is the ability to simulate the effects of actions, especially commonsense effects (e.g., moving a container moves the elements in it). Executable semantic parsing approaches. The problem of Fig. 1 could also naturally be cast as an executable semantic parsing (ex. SP) task. Similar tasks already exist, for example, the “Alchemy” sub-task of the SCONE dataset features beakers of chemicals that are mixed, poured, and drained. Executable approaches can leverage simulation to learn structured world models, but are limited by hardcoded, domain-specific executors; adding tasks requires substantial manual effort. For humans, through largely subconscious metaphorical inference (related to transfer and meta-learning in general AI (Lake et al., 2017a)), it is obvious that both SCONE and Fig. 1 share much the same structure. 
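To make the notion of such a hard-coded, domain-specific executor concrete, the sketch below (our illustration, not part of any existing dataset or system) implements a tiny symbolic world for the story of Fig. 1; a semantic parser would map each sentence of the story to one of these calls, and the final world state can then be queried to answer questions.

```python
# A minimal hard-coded executor for the Fig. 1 world (our illustrative sketch).
# A semantic parser would map the story's sentences to these calls.
class Box:
    def __init__(self, capacity=5):
        self.capacity = capacity
        self.objects = []                      # e.g. ["apple", "apple"]

class Crate:
    def __init__(self, n_boxes=4):
        self.boxes = [Box() for _ in range(n_boxes)]

world = [Crate(), Crate()]                     # "The world contains 2 crates."

def add(crate, box, obj, n=1):
    target = world[crate].boxes[box]
    assert len(target.objects) + n <= target.capacity
    target.objects.extend([obj] * n)

def move(obj, src, dst):
    """Move all objects of a given kind from one box to another."""
    (sc, sb), (dc, db) = src, dst
    moved = [o for o in world[sc].boxes[sb].objects if o == obj]
    world[sc].boxes[sb].objects = [o for o in world[sc].boxes[sb].objects if o != obj]
    world[dc].boxes[db].objects += moved

def remove_all(obj, crate):
    """'Next, all apples were removed from the second crate.'"""
    for box in world[crate].boxes:
        box.objects = [o for o in box.objects if o != obj]

# Initial state and the two events from the story:
add(0, 0, "apple", 2)                          # two apples in box 1 of crate 1
add(1, 1, "orange"); add(1, 1, "apple")        # one orange, one apple in box 2 of crate 2
move("apple", (0, 0), (1, 0))                  # apples: crate 1/box 1 -> crate 2/box 1
remove_all("apple", 1)                         # remove all apples from crate 2
```

An executor of essentially this shape would also suffice for SCONE's "Alchemy" domain, only with beakers and chemicals in place of crates and fruit.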
This similarity allows for effortless generalization, effectively re-purposing a relatively simple executor (for literal language) flexibly across many tasks. 2.2 Non-literal Language The previous challenge involved literal language, amenable to symbolic execution. However, non-literal language is pervasive in everyday speech (Lakoff and Johnson, 1980). Consider the example in Fig. 2: the phrase “head of the French Army” is non-literal, implying that the army can be treated as a human body. The execution semantics of verbs like “attacked” and “defend” are also non-literal; they are highly contextual, requiring interpretation beyond word-sense disambiguation alone. “Russian hackers attacked the Pentagon networks” or “The senator attacked the media” entail very different simulations. This ambiguity is challenging for non-neural (symbolic) simulation2Segal et al. (2019) and Dua et al. (2019), respectively. COUNTER FORCE French Army Napoleon HEAD of Attack FORCE, MOTION Fort BODY LOCATION "Napoleon, the head of the French Army, attacked the Russian fort,      but found it well defended and had to turn back." Russian Army HEAD of French Army Russian Army Fort BODY ABORTED ACTION Napoleon HEAD of Defend French Army Russian Army Fort BODY 1 2 3 1 2 3 LOCATION LOCATION Napoleon Figure 2: Non-literal language challenge. To understand this sentence, humans rely on metaphoric inference over embodied concepts (in blue, also called schema; see §3). For example, here “attack” evokes a FORCE or MOTION schema, used to construct a mental model of the scene via mental simulation (§4). based approaches. Humans compose a structured mental model from the language through schemata and mental simulation, as discussed in §3,§4. To summarize, the limitations outlined above motivate the attempt to extend the capability of simulation to general linguistic inputs. Doing so would enable the construction of grounded, manipulable, and interpretable representations from text. Two desiderata follow from the challenges: (1) more flexible utilization of symbolic executors by exploiting shared (analogical) structures between texts (§2.1), and (2) learned, neural executors for non-literal language comprehension (§2.2). 3 Embodied Cognitive Linguistics: A Model Building Paradigm Turning to cognitive science for inspiration, we focus on embodied cognitive linguistics (ECL), an important paradigm directly addressing both desiderata. This section presents a brief overview and key tenets of ECL, specifically the theoretical foundations Lakoff and Johnson (1980) and Feldman and Narayanan (2004) developed. Most contemporary cognitive accounts of language incorporate concepts from ECL to some degree. A full review is out of scope of this work; see G¨ardenfors (2014) and §4,§5 for discussion in the NLU context. 6271 Early cognitive theories assumed a disembodied, symbolic representation of knowledge (Lewis, 1976; Kintsch and Van Dijk, 1978), separate from the brain’s modal systems (vision, motor control, etc.). In contrast, the embodied cognition (EC) view, based on widespread empirical findings, focuses on the role of the body in cognition. In this view, knowledge is stored using multimodal representations (mental imagery, memories, etc.) that arise from embodied experience and action in the world (Barsalou, 2008; Proffitt, 2006). ECL postulates that linguistic representations and other, higher-level cognitive functions are deeply grounded in neural modal systems (Lakoff and Johnson, 1980; Barsalou, 2008). 
This view is compelling, as it addresses the grounding problem (Harnad, 1990) by linking between high-level symbolic constituents of mental representations and experience or action in the physical world (Varela et al., 2017). Note that embodiment is far from an end-all for language comprehension: for example, social and cultural aspects too are crucial (Arbib et al., 2014). Still, ECL laid important conceptual foundations also underlying subsequent accounts: • Embodied schemata: Pre-linguistic structures formed from bodily interactions and recurring experience, such as CONTAINMENT, PARTWHOLE, FORCE, MOVEMENT (Langacker, 1987; Talmy, 1985, 1983). • Metaphoric inference:3 The process by which new information may be inferred via structural similarities to a better-understood instantiated system (Lakoff and Johnson, 1980; Gallese and Lakoff, 2005; Day and Gentner, 2007). For example, “I have an example IN mind” suggests that the abstract concept mind is mapped to the more concrete domain of containers. • Mental simulation. The reenactment of perceptual, motor, and introspective states acquired during experience with the world, body, and mind. In EC, diverse simulation mechanisms (also called mental or forward models (Rumlehart et al., 1986; Grush, 2004)) support a wide spectrum of cognitive activities, including language and decision making (Barsalou, 2008). We believe that ECL is a useful paradigm for addressing the challenges of §2, as it articulates the role of analogy and mental simulation in NLU. The following two ECL hypotheses summarize 3Also called analogical reasoning, we use “metaphorical” and “analogical” interchangeably. them (Lakoff and Johnson, 1980; Feldman and Narayanan, 2004): Hypothesis 1 (Simulation): Humans understand the meaning of language by mentally simulating its content. Language in context evokes a simulation structured by embodied schemata and metaphoric mappings, utilizing the same neural structures for action and perception in the environment. Understanding involves inferring and running the best fitting simulation. Hypothesis 2 (Metaphoric Representation): Human concepts are expressible through hierarchical, compositional, metaphoric mappings over a limited vocabulary of embodied schema. Abstract concepts are expressed using more literal concepts. Early ECL Implementations. Early attempts to implement ECL in actual language understanding systems were founded on Narayanan (1997)’s x-schema simulation framework and Embodied Construction Grammar (Bergen and Chang, 2005). While notable for approaching challenging problems involving mental simulation, and complex, metaphoric language, early implementation efforts were not operationalized to scale to mainstream applications (Lakoff and Narayanan, 2010). These works also focused on a particular type of simulation (sensorimotor), understood as only one mechanism of many used in language understanding (Stolk et al., 2016). FrameNet (Ruppenhofer et al., 2016) and MetaNet (David and Dodge, 2014) are closely related projects in that each provides an extensive collection of schemata used in everyday and metaphoric language comprehension, respectively, via the concept of a semantic frame (Fillmore, 1985). However, neither incorporates simulation semantics, as needed for a full realization of the ECL vision (Chang et al., 2002). 4 Linking ECL to NLU and Embodied AI Research We propose a unifying view of ECL, bringing it closer to contemporary cognitive science and deep learning approaches. 
This section presents notations and motivating intuitions, further developing the computational framework in §5,§6. The proposal centers around the view of natural language as a kind of neural programming language (Lupyan and Bergen, 2016), or higher-level cognitive control system for systematically querying and induc6272 Concept Symbolic ECL Embodied AI Primitives Basic data structures, operators, variables... Schemata: MOVE, CONTAINER, PART-WHOLE... Deep neural world & action representations (learned through interaction) Knowledge Organization a) Composition, inheritence b) Libraries a) Hierarchical, compositional metaphoric mappings b) Compiled Knowledge Executable Unit Instruction Semantic parse ˜a Execution Trace Intermediate program states Mental models ˜T (˜s, ˜a) Simulation Executor Emulator† ˜T Semantic parsing / grounding Parser to executable symbolic program Parser to executable neural program O−1, π Table 1: Natural language as a neural programming language conceptualization, with correspondence between symbolic programming, ECL, and embodied AI, using standard POMDP notation. Tilde notation refers to internal counterparts of T, s, a used in mental simulation. †Also called mental simulation (Bergen and Chang, 2005), we adopt emulator (Glenberg, 2008) to conform with contemporary cognitive science accounts. ing changes in the mental and physical states of recipients (Elman, 2004; Stolk et al., 2016; Borghi et al., 2018). This approach builds on the ECL hypotheses and suggests a broader view of mental simulation, one that is readily amenable to the same computational formulation as current embodied AI and executable semantic parsing approaches. Preliminaries. At the core of embodied approaches is the Partially Observable Markov Decision Process (POMDP; Kaelbling et al., 1998). It governs the relations between states (s), actions (a), observations (o), and rewards (r). Of particular interest are the recognition O−1 : O →S, policy π : S →A, and transition T : S × A →S functions. Focusing on mental simulation rather than actual external action, we assume a degree of equivalence between external and internal representations (Rumlehart et al., 1986; Hamrick, 2019). We consider internal mental states and actions (˜s, ˜a), effecting change to mental models via a learned neural emulator ˜T (Grush, 2004). Finally, language is considered a form of action (Glenberg, 2008) via external and internal utterances (i.e., semantic parses). Connecting symbolic & embodied language understanding. Table 1 presents a structured version of the neural programming language conceptualization. Importantly, this view highlights the important commonalities and differences between ECL and both symbolic programming languages, as well as embodied neural mechanisms, for perception and action. We illustrate these relations more explicitly through a comparison between ECL and executable semantic parsing (Table 1, bottom). Executable semantic parsing. Involves parsing a novel linguistic input o into a symbolic program a, whose execution4 yields a desired goal state: T O−1 (o) , a  = s∗. Executable semantic parsing focuses on action in an external, symbolic environment T, and typically doesn’t address ˜T, e.g., mapping a natural language question o directly to an executable query a on an SQL engine T. ECL semantic parsing. Shares the same structure as executable semantic parsing, with the important distinction that simulation is enacted via internal neural representations: ˜T O−1 (o) , ˜a  = ˜s∗. 
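Read as an interface, both variants instantiate the same loop: parse an utterance into a sequence of actions and fold a transition function over them (applying T iteratively, as in footnote 4). The sketch below is our schematic rendering of this correspondence, not a concrete system; all names are placeholders, and `parse` stands roughly for O^-1 followed by the policy π in the paper's notation.

```python
# Schematic sketch of the shared interface behind Table 1 (ours, not a
# concrete system). A symbolic executor and a learned neural emulator
# both expose a transition function applied over a parsed action sequence.
from typing import Callable, Sequence, TypeVar

State = TypeVar("State")    # s for symbolic worlds, s-tilde for neural ones
Action = TypeVar("Action")  # a for symbolic programs, a-tilde for neural parses

def understand(
    utterance: str,
    parse: Callable[[str], Sequence[Action]],      # roughly O^-1 followed by pi
    transition: Callable[[State, Action], State],  # T (symbolic) or T-tilde (learned)
    state: State,                                   # current (mental) model
) -> State:
    """Fold the transition function over the parsed actions."""
    for action in parse(utterance):
        state = transition(state, action)
    return state
```

In executable semantic parsing, `transition` is a hard-coded interpreter such as an SQL engine; in the ECL reading, it is a learned neural emulator operating over internal state representations.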
The fully neural formulation enables grounded understanding of non-literal language, demonstrated here for the Fig. 2 example. Metaphoric inference (hyp. 2) facilitates parsing a novel linguistic input o into internal, structured, neural state representations ˜s, ˜a. Accordingly, the utterance u=“Napoleon, the head of the French Army” might be parsed to an internal state ˜s composed of a PARTWHOLE schema as shown in the figure. The phrase “attacked the Russian fort” could be grounded to a parse ˜a driving simulation over MOTION and FORCE schemata. The requirement that ˜s and ˜a should afford mental simulation (hyp. 1) by the neural world emulator ˜T marks an important difference from current neural word embeddings, one that contributes to deeper language understanding; in the resulting mental model ˜T (˜s, ˜a), Napoleon and the French Army likely moved together due to the PART-WHOLE relation between them. This inference is non-trivial since it requires implicit 4Slightly abusing notation, we apply T iteratively on a sequence of actions a = (a0, ..., aL−1). 6273 knowledge (heads and bodies often move together). Indeed, a SOTA NLI model5 considers it “very likely” that the Fig. 2 sentence contradicts the entailment that “The French Army moved towards the fort but did not enter it.” To summarize: • Executable semantic parsing approaches address grounding literal language to symbolic primitives; and metaphoric inference suggests a mechanism for grounding general language using neural primitives (schemata). • Executable semantic parsing approaches utilize hard-coded, external symbolic executors, whereas ECL highlights the role of learned neural world emulators, as in current embodied research AI efforts (see §7.2). 5 Proposal for an Embodied Language Understanding Model Formalizing the view characterized above suggests a novel computational model of language understanding. While current statistical models focus on the linguistic signal, research shows that most of the relevant information required for understanding a linguistic message is not present in the words (Stolk et al., 2016; David et al., 2016). Accordingly, the ECL view suggests shifting the focus to the mental models that communicators use, and the neural mechanisms used to construct them, e.g., mental simulation. What follows here adapts a relevant cognitiveinspired framework from general AI to the present NLU setting (§5.1), and discusses computational challenges (§5.2). Note that similar insights have been applied to multi-agent communication problems (Andreas et al., 2017), but their application to general NLU has been limited. 5.1 Formal Framework The recently introduced Consciousness Prior (CP; Bengio, 2017) is a framework to represent the mental model of a single agent, through the notion of abstract state representations.6 Here, an abstract state corresponds with ˜s (§4), a low-dimensional, structured, interpretable state encoding, useful for planning, communication, and predicting upcoming observations (Franc¸ois-Lavet et al., 2019). One example is a dynamic knowledge graph embedding to represent a scene (Kipf et al., 2020). 5We use Liu et al. (2019b) with https://demo. allennlp.org/textual-entailment/. 6For brevity we omit discussion of deriving abstract states from the full mental state, see Bengio (2017) for details. We adapt CP to a two-player cooperative linguistic communication setting (Tomasello, 2008). We assume a communicator (A) and recipient (B), as shown in Fig. 3. 
The computational problem of communicators is a “meeting of minds” (G¨ardenfors, 2014), or achieving some alignment of their mental models (Rumelhart, 1981; Stolk et al., 2016): the communicator A wishes to induce in B some (possibly ordered) set of goal abstract states G∗. We leave exploration of the communicator side to future work, and focus here on understanding. We assume that A sequentially generates utterances ut ∈U (we assume equivalence between utterances u and observations o) using an utterance model (Bengio, 2017). Analogously, B uses a comprehension model C s.t., ˜st = C (˜st−1, ut). We assume that alignment is possible: there exists some sequence of utterances that will induce G∗. This framework is readily applicable to static text (reading comprehension). For example, in Fig. 1, G∗would be the sequence of desired states, and each sentence corresponds to an utterance (u1 =“The world contains 2 crates.”,...). 5.2 Computational challenges of embodiment We can now more precisely characterize the challenges that the recipient faces. At the root of the problem is the embodiment principle (Lawrence, 2017): human internal representations and computation capacity, as represented by ˜s and ˜T, respectively, are many orders of magnitude larger than their linguistic communication “bandwidth”. We note that though ˜st is only a subspace of the full mental state, following Stolk et al. (2016); Bengio (2017) we assume that it still holds that dim (˜st) ≫dim (ut).The embodiment principle dictates extreme economy in language use (Grice et al., 1975), and results in three major challenges: Common ground (prior world knowledge). Meaning cannot be spelled out in words but rather must be evoked in the listener (Rumelhart, 1981) by assuming and exploiting common ground (Clark and Schaefer, 1989; Tomasello, 2008), i.e., shared structures of mental representations. In other words, to achieve some aligned goal state g∗, the communicators must rely heavily on pre-existing similarities in ˜s, ˜a, and ˜T. Developing computational versions of human world models ( ˜T) is likely AI-complete or close, but useful middle ground may be attained by partial approximations. 6274 Mental Model   Intents   Linguistic Channel Utterance Model  Communicator    World State  Recipient C2 C2 "Remove all apples from the second crate" Comprehension   C1 1 2 3 4 Figure 3: Schema of linguistic communication framework. Communicator’s intent (1) is a high dimensional mental state, i.e., remove apples from the second crate. The low capacity of the linguistic channel (2) leaves the burden of understanding primarily on Communicator and Recipient (embodiment principle). The Recipient’s goal is to understand (3), i.e., reconstruct the intent by integrating linguistic input, knowledge of the state of the world, and internal knowledge (memories, commonsense). Reconstruction results in a successful alignment (4). Common ground (discourse). In the context of discourse, new information must be accumulated efficiently to update the mental model (Clark and Schaefer, 1989; Stolk et al., 2016). Consider “Remove all apples from the second crate” (Figure 1). Full comprehension is only possible in the context of a sufficiently accurate mental model. Using our previous notations, the comprehension of ut depends both on the previous utterances u1:(t−1) and intermediate mental model ˜st−1. Abstract vs. Literal Language. 
Interpretation of literal language is relatively straightforward – it is the language first acquired by children, directly related to the physical world. However, much of human language is more abstract, relying on metaphors borne of embodiment. The symbolic programming analog fails for utterances like “these elections seem like a circus”. Symbolic programming languages cannot handle nonliteral interpretations: how are elections like a circus? This is related to selective analogical inference (Gentner and Forbus, 2011), closely related to ECL: not everything in the source domain (circus) is mapped to the target (elections). Humans easily perceive the salient metaphoric mappings (clown→candidate), but this feat remains extremely complex for machines. 6 Architecture Sketch This section presents a schematic ECL-inspired architecture towards the implementation of the comprehension model (C), which addresses the challenges presented in §5.2. Fig. 4 shows the proposed architecture. For simplicity, the focus is on a static reading comprehension setting, but the architecture supports richer environments as well. 6.1 Environment The environment provides an “interaction API” to the agent, as well as the reward signal. The supported interaction may vary considerably depending on the task; for reading comprehension, it allows structured access to the text while supporting flexible reading strategies (Yuan et al., 2019). The flexibility is important for long documents, where navigation may be required (Geva and Berant, 2018). For executable semantic parsing, there might be external systems to interact with besides the text, such as a database (Liang et al., 2016). 6.2 Agent The agent architecture approximates the important ECL functions outlined in §4, and consists of four main modules: Memory. We distinguish between two forms of memory, the first an episodic, short-term mental model – the system’s current abstract state representation (˜st). The symbolic programming analog is the execution trace of a program, containing the states of relevant working variables at each execution step. Fig. 4 displays the updated mental model, after the removal of the apples. Compiled knowledge, or long-term memory, reflects highly familiar object representations, behaviors and schemata, such as common sense, intuitive psychology and physics. The symbolic programming language analogs of this are libraries; largely static, hierarchical and compositional repositories of functions and 6275 Emulator Natural Language Environment Agent "Remove all apples from the second crate." C2 Sub-goal 2 Read next sentence Sem. parse                                                Global Memory Mental Model (Short-term) . Compiled Knowledge (Long-term) Action "Library functions"  imports Sub-goal 3 C2 C1 for box in crate2: remove apples from box Parsing: high-level perception, control Sub-goal 1 Figure 4: Architecture for comprehender (§5), demonstrated on a symbolic version of the example task of Fig. 1. The agent receives natural language input from the environment. The agent has global memory – short-term, keeping track of the mental model of the world, and long-term, containing compiled knowledge (“library classes and functions”). The parser interprets input to parse ˜at enacting mental simulation using emulator. The mental model is then updated, ready for the next input. The sub-goals refer to the order in which components are learned (as opposed to hard-coded) in our proposed roadmap (§7). classes. 
In the course of language interpretation, these libraries are “importable”: for the symbolic example in Fig. 4, the parser might instantiate a new variable of an imported type (e.g., crate2 = Container()). Both types of memory are accessible for all components of the agent. Parser. Abstraction of higher-level perception, control, reasoning and linguistic functions. Handles interpretation of new linguistic inputs based on prior knowledge and the current mental state. Consonant with the view of analogy-making as a kind of higher-level perception or recognition (Mitchell, 1993), metaphoric inference is involved in grounding a novel input ut into internal, neural state representations ˜st, ˜at affording simulation. See Fig. 4 and Fig. 2 for examples on literal and non-literal language, respectively. Emulator. Functionally similar to the executor module in executable semantic parsing, but learned, and obviously far greater in scale. This module is an abstraction of neural emulation mechanisms ( ˜T), representing a wide range of functions, from lower-level motor control and imagery to higher-level models used for planning and theory of mind (Grush, 2004). It operates over the current mental model and semantic parse from the parser. The output is then an updated mental model. Importantly, the proposed architecture is designed to address the challenges outlined in §5.2; compiled knowledge underlies human common ground, the building blocks of ˜s, ˜a and ˜T. Memory and emulation are instrumental for accumulation in discourse. The ability to understand abstract language involves all modules in the system. 7 Implementation Roadmap The architecture outlined in §6 is very ambitious; its implementation requires much further research. This section proposes a roadmap to this goal, identifying three sub-goals (Fig. 4), presented in order of increasing difficulty. Broadly speaking, the level of difficulty is determined by which components are assumed as given in the input (here this also means they are hard-coded in a symbolic programming language), and which must be learned. 7.1 Sub-goal 1: learning open-domain simulation Observing that literal language is close to the embodied primitives level, its interpretation is simpler (than that of non-literal language, see §4). Therefore, in this phase, the emulator and compiled knowledge are hard-coded; here the focus is learning the parser. In other words, this sub-goal focuses on extending executable semantic parsing from relatively narrow domains to handle more general literal language on-the-fly, similarly to zero-shot semantic parsing (Givoli and Reichart, 2019). For the example in §2.1, the parser could be expected to infer the types (boxes as containers, fruits as objects) either by context (Yao et al. (2018) explore a preliminary schema-based approach) or 6276 explicit declarative language, using them to configure the emulator to handle the specific required problem setting (Tamari et al., 2020). As in similar projects exploring embodied understanding (Pustejovsky and Krishnaswamy, 2016; Baldridge et al., 2018), new simulator frameworks must be developed. While full embodiment calls for multiple modalities, the degree to which it is required remains an important open question (Lupyan and Lewis, 2019). Accordingly, and for immediate applicability to purely textual NLU problems we propose also focusing on the simpler setting of interactive text (Nelson, 2005). 
Recent research on text-based games shows how agents can learn to “program” in such languages (Cˆot´e et al., 2019; Ammanabrolu and Riedl, 2019), and how real language understanding problems can be framed as executable semantic parsing using configurable text-based simulators (Tamari et al., 2019). 7.2 Sub-goal 2: learning to simulate This phase assumes that the compiled knowledge is given (hard-coded), and the parsing and emulator modules are neural (learned). A hard-coded emulator will likely be needed to train a learned emulator. The learned event execution of Narayanan (1997) provides a useful starting point towards computational models capable of such inference. In general, learned simulation is relatively unexplored in the context of natural language, though recent work has explored it in generated instruction following setups (Gaddy and Klein, 2019; Adhikari et al., 2020). Outside of NLU, learning structured world models is a long-studied, fast-growing field in embodied AI research (Schmidhuber, 1990; Ha and Schmidhuber, 2018; Hamrick, 2019; Kipf et al., 2020), and recently also in learned executors for neural programming (Kant, 2018). We expect much useful cross fertilization with these fields. 7.3 Sub-goal 3: learning compiled knowledge This phase focuses on the component seemingly hardest to learn – compiled knowledge. Out of scope here is fully neural setting where all components are jointly learned, as in continual learning research (Parisi et al., 2019). Instead, we focus on a simpler setting, in which the compiled knowledge is learned but represented by symbolic code; i.e., learning the static code library underlying the simulation framework. This sub-goal is relevant for training the parser (§7.1) as well as the emulator (§7.2), and can be pursued in parallel to them. In this setting, learning compiled knowledge is closely related to automated knowledge base construction (Winn et al., 2019) or frame induction from text (QasemiZadeh et al., 2019). Our proposed paradigm suggests enriching classic symbolic knowledge representations (Speer et al., 2017) to executable form (Tamari et al., 2020). Preliminary steps in this direction are seen in inferential knowledge bases such as ATOMIC (Sap et al., 2019), which provides limited execution logic using edges typed with if-then relations. Alongside FrameNet and MetaNet, others have collected schema and metaphor mappings, by learning them from large corpora (Beigman Klebanov et al., 2016; Gao et al., 2018). Pastra et al. (2011) built a database of concepts directly groundable to sensorimotor representations, primarily for robotics applications. 8 Conclusions This position paper has proposed an approach to representation and learning based on the tenets of ECL. The proposed architecture, drawing on contemporary cognitive science, aims to address key limitations of current NLU systems through mental simulation and grounded metaphoric inference. We outlined major challenges and suggested a roadmap towards realizing the proposed vision. Growing empirical evidence shows that language is intricately intertwined with a vast range of other neural processes. Accordingly, this work suggests a symbiotic view of cognitive science, embodied AI, and computational linguistics. By sharing common foundational problems, these fields may better share and co-evolve common solutions. Finally, we believe that attaining deeper language understanding must be a large scale effort, beyond the scope of any one research group. 
We hope that the paradigm presented here will help provide coherence to such efforts. One of our main goals was to stimulate a discussion; moving forward, we welcome comments, feedback, and suggestions. Acknowledgments We thank the reviewers for their insightful comments. This work was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant no. 852686, SIAM) and NSF-BSF grant no. 2017741 (Shahaf), as well as the Israel Science Foundation grant no. 929/17 (Abend). 6277 References Ashutosh Adhikari, Xingdi Yuan, Marc-Alexandre Cˆot´e, Mikul´aˇs Zelinka, Marc-Antoine Rondeau, Romain Laroche, Pascal Poupart, Jian Tang, Adam Trischler, and William L. Hamilton. 2020. Learning dynamic knowledge graphs to generalize on text-based games. Computing Research Repository, arXiv:2002.09127. Prithviraj Ammanabrolu and Mark Riedl. 2019. Playing text-adventure games with graph-based deep reinforcement learning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3557–3565, Minneapolis, Minnesota. Association for Computational Linguistics. Jacob Andreas, Anca Dragan, and Dan Klein. 2017. Translating neuralese. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 232–242, Vancouver, Canada. Association for Computational Linguistics. Michael A. Arbib, Brad Gasser, and Victor Barr`es. 2014. Language is handy but is it embodied? Neuropsychologia, 55(1):57–70. Jason Baldridge, Tania Bedrax-Weiss, Daphne Luong, Srini Narayanan, Bo Pang, Fernando Pereira, Radu Soricut, Michael Tseng, and Yuan Zhang. 2018. Points, paths, and playscapes: Large-scale spatial language understanding tasks set in the real world. In Proceedings of the First International Workshop on Spatial Language Understanding, pages 46–52, New Orleans, Louisiana, USA. Lawrence W. Barsalou. 2008. Grounded Cognition. Annual Review of Psychology, 59(1):617–645. Beata Beigman Klebanov, Chee Wee Leong, E. Dario Gutierrez, Ekaterina Shutova, and Michael Flor. 2016. Semantic classifications for detection of verb metaphors. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 101–106, Berlin, Germany. Association for Computational Linguistics. Yoshua Bengio. 2017. The Consciousness Prior. Computing Research Repository, arXiv:1709.08568. Benjamin K Bergen and Nancy Chang. 2005. Embodied construction grammar in simulation-based language understanding. Construction grammars: Cognitive grounding and theoretical extensions, 3:147. Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. Computing Research Repository, arXiv:2004.10151. Antoine Bordes, Jason Weston, Sumit Chopra, Tomas Mikolov, Arman Joulin, S Rush, and L Bottou. 2015. Artificial tasks for artificial intelligence. Facebook AI Research. ICLR–San Diego–May, 7:2015. Anna M. Borghi, Laura Barca, Ferdinand Binkofski, Cristiano Castelfranchi, Giovanni Pezzulo, and Luca Tummolini. 2018. Words as social tools: Language, sociality and inner grounding in abstract concepts. Physics of Life Reviews, 29:120–153. 
Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779, Florence, Italy. Association for Computational Linguistics. Nancy Chang, Srini Narayanan, and Miriam R.L. Petruck. 2002. Putting frames in perspective. In COLING 2002: The 19th International Conference on Computational Linguistics. Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. 2019. BabyAI: First steps towards grounded language learning with a human in the loop. In International Conference on Learning Representations. Herbert H Clark and Edward F Schaefer. 1989. Contributing to discourse. Cognitive science, 13(2):259– 294. Marc-Alexandre Cˆot´e, ´Akos K´ad´ar, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, and et al. 2019. Textworld: A learning environment for text-based games. Computer Games, page 41–75. Oana David and Ellen Dodge. 2014. Building the metanet metaphor repository: The natural symbiosis of metaphor analysis and construction grammar. In The 8th International Construction Grammar Conference (ICCG 8), 3-6 September, Osnarbr¨uck, Germany. Oana David, George Lakoff, and Elise Stickles. 2016. Cascades in metaphor and grammar. Constructions and Frames, 8(2):214–255. Samuel B. Day and Dedre Gentner. 2007. Nonintentional analogical inference in text comprehension. Memory & Cognition, 35(1):39–49. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. 6278 Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368–2378, Minneapolis, Minnesota. Association for Computational Linguistics. Jeffrey L Elman. 2004. An alternative view of the mental lexicon. Trends in cognitive sciences, 8(7):301– 306. Jerome Feldman and Srinivas Narayanan. 2004. Embodied meaning in a neural theory of language. Brain and Language, 89(2):385–392. Charles J Fillmore. 1985. Frames and the semantics of understanding. Quaderni di semantica, 6(2):222– 254. Maxwell Forbes, Ari Holtzman, and Yejin Choi. 2019. Do neural language representations learn physical commonsense? Proceedings of the 41st Annual Conference of the Cognitive Science Society. Vincent Franc¸ois-Lavet, Yoshua Bengio, Doina Precup, and Joelle Pineau. 2019. Combined reinforcement learning via abstract representations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3582–3589. David Gaddy and Dan Klein. 2019. Pre-learning environment representations for data-efficient neural instruction following. 
2020
559
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 600–608 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 600 Gated Convolutional Bidirectional Attention-based Model for Off-topic Spoken Response Detection Yefei Zha1 Ruobing Li1 Hui Lin1,2 1LAIX Inc. 2Shanghai Key Laboratory of Artificial Intelligence in Learning and Cognitive Science {leo.zha, ruobing.li, hui.lin}@liulishuo.com Abstract Off-topic spoken response detection, the task aiming at predicting whether a response is offtopic for the corresponding prompt, is important for an automated speaking assessment system. In many real-world educational applications, off-topic spoken response detectors are required to achieve high recall for off-topic responses not only on seen prompts but also on prompts that are unseen during training. In this paper, we propose a novel approach for offtopic spoken response detection with high offtopic recall on both seen and unseen prompts. We introduce a new model, Gated Convolutional Bidirectional Attention-based Model (GCBiA), which applies bi-attention mechanism and convolutions to extract topic words of prompts and key-phrases of responses, and introduces gated unit and residual connections between major layers to better represent the relevance of responses and prompts. Moreover, a new negative sampling method is proposed to augment training data. Experiment results demonstrate that our novel approach can achieve significant improvements in detecting off-topic responses with extremely high ontopic recall, for both seen and unseen prompts. 1 Introduction Off-topic spoken response detection is a crucial task in an automated assessment system. The task is to predict whether the response is off-topic for the corresponding question prompt. Table 1 shows an example of on-topic and off-topic responses for a prompt. Off-topic examples in human-rated data is often too sparse to train an automated scoring system to reject off-topic responses. Consequently, automated scoring systems tend to be more vulnerable than human raters to scoring inaccurately due to off-topic responses ( Lochbaum et al., 2013; Higgins and Heilman, 2014). To ensure the validity of speaking assessment scores, it is necessary to have a mechanism to flag off-topic responses before scores are reported (Wang et al., 2019). In our educational application, we use the automated speaking assessment system to help L2 learners prepare for the IELTS speaking test. We do see a higher rate of off-topic responses in freemium features as some users just play with the system. In such a scenario, accurate off-topic detection is extremely important for building trust and converting trial users to paid customers. Prompt: What kind of flowers do you like? On-topic: I like iris and it has different meaning of it a wide is the white and um and the size of a as a ride is means the ride means love but I can not speak. Off-topic: Sometimes I would like to invite my friends to my home and we can play the Chinese chess dishes this is my favorite games at what I was child. Table 1: An example of on-topic and off-topic responses for a prompt. Initially, many researchers used vector space model (VSM) ( Louis and Higgins, 2010; Yoon and Xie, 2014; Evanini and Wang, 2014) to assess the semantic similarity between responses and prompts. In recent years, with the blooming of deep neural networks (DNN) in natural language processing (NLP), many DNN-based approaches were applied to detect off-topic responses. 
Malinin et al. (2016) used the topic adapted Recurrent Neural Network language model (RNN-LM) to rank the topic-conditional probabilities of a response sentence. A limitation of this approach is that the model can not detect off-topic responses for new question prompt which was not seen in training data (unseen prompt). Later, off-topic response detection was considered as a binary classification task using end-to-end DNN models. Malinin et al. (2017) proposed the first end-to-end DNN 601 method, attention-based RNN (Att-RNN) model, on off-topic response detection task. They used a Bi-LSTM embedding of the prompt combined with an attention mechanism to attend over the response to model the relevance. CNNs may perform better than RNNs in some NLP tasks which require key-phrase recognition as in some sentiment detection and question-answer matching issues (Yin et al., 2017). Lee et al. (2017) proposed a siamese CNN to learn semantic differences between on-topic response-questions and off-topic response-questions. Wang et al. (2019) proposed an approach based on similarity grids and deep CNN. However, the cold-start problem of off-topic response detection has not been handled well by the aforementioned approaches. It is not until enough training data of unseen prompts are accumulated that good performance could be achieved. Besides, these methods draw little attention to the vital ontopic false-alarm problem for a production system. I.e., extremely high recall of on-topic responses is also required to make real-user-facing systems applicable. In this paper, to address the issues mentioned above, a novel approach named Gated Convolutional Bidirectional Attention-based Model (GCBiA) and a negative sampling method to augment training data are proposed. The key motivation behind our model GCBiA is as follows: convolution structure captures the key information, like salient n-gram features (Young et al., 2018) of the prompt and the response, while the bi-attention mechanism provides complementary interaction information between prompts and responses. Following R-Net (Wang et al., 2017) in machine comprehension, we add the gated unit as a relevance layer to filter out the important part of a response regarding the prompt. These modules contribute to obtaining better semantic matching representation between prompts and responses, which is beneficial for both seen and unseen prompts. Additionally, we add residual connections (He et al., 2016) in our model to keep the original information of each major layer. To alleviate the cold-start problem on unseen prompts, a new negative sampling data augmentation method is considered. We compare our approach with Att-RNN model and G-Att-RNN (our strong baseline model based on Att-RNN). Experiment results show that GCBiA outperforms these methods both on seen and unseen prompts benchmark conditioned on extremely high on-topic response recall (0.999). Moreover, the model trained with negative sampling augmented data achieves 88.2 average off-topic recall on seen prompts and 69.1 average off-topic recall on unseen prompts, respectively. In summary, the contribution of this paper is as follows: • We propose an effective model framework of five major layers on off-topic response detection task. The bi-attention mechanism and convolutions are applied to the focus on both topic words in prompts and key-phrase in responses. The gated unit as a relevance layer can enhance the relevance of prompts and responses. 
Besides, residual connections for each layer were widely used to learn additional feature mapping. Good semantic matching representation is obtained by these modules on both seen and unseen prompts. The GCBiA model achieves significant improvements by +24.0 and +7.0 off-topic recall on average unseen and seen prompts respectively, comparing to the baseline method. • To explore the essence of our proposed model, we conduct visualization analysis from two perspectives: bi-attention visualization and semantic matching representation visualization to reveal important information on how our model works. • To improve our result on unseen prompts further, we propose a novel negative sampling data augmentation method to enrich training data by shuffling words from the negative sample in off-topic response detection task. It allows the GCBiA model to achieve higher averaging off-topic recall on unseen prompts. 2 Approach 2.1 Task formulation The off-topic response detection task is defined as follows in this paper. Given a question prompt with n words XP = {xP t }n t=1 and the response sentence with m words XR = {xR t }m t=1, output one class o = 1 as on-topic or o = 0 as off-topic. 2.2 Model Overview We propose a model framework of five major layers on off-topic response detection task. The proposed 602 model GCBiA (shown in Figure 1) consists of the following five major layers: • Word Embedding Layer maps each word to a vector space using a pre-trained word embedding model. • Contextual Encoder Layer utilizes contextual information from surrounding words to reinforce the embedding of the words. These first two layers are applied to both prompts and responses. • Attention Layer uses the attention mechanism in both directions, prompt-to-response and response-to-prompt, which provides complementary information to each other. • Relevance Layer captures the important parts of the response regarding a prompt via the gated unit. • Output Layer predicts whether the response is off-topic given the prompt. In detail, each layer is illustrated as follows: 1. Word Embedding Layer. We first convert words to respective trainable word embeddings, initialized by pre-trained Glove (Pennington et al., 2014). The embeddings of prompts W P = {wP t }n t=1 and responses W R = {wR t }m t=1 are passed directly to the next contextual encoder layer. 2. Contextual Encoder Layer. A stack of convolutional layers are employed to extract salient n-gram features from prompts and responses, aiming at creating an informative latent semantic representation of prompts and responses for the next layer. The l-th convolutional layer with one filter is represented as cl i in Equation (1), where W ∈Rk×d, b ∈Rd. We ensure that the output of each stack matches the input length by padding the input of each stack. The number of convolutional layers l is 7, the kernel size k is 7 and the number of filters in each convolutional layer is 128. cl i = f(W l[cl−1 i−k/2, ..., cl−1 i+k/2] + bl) (1) After the convolutional representation of prompts U P and responses U R in Equation (23) are obtained, a max pooling layer to extract the fixed-length vector is performed, seen in Equation (4-5). Max-pooling can keep the most salient n-gram features across the whole prompt/response. U P = CONV (W P ) (2) U R = CONV (W R) (3) vP = maxpooling(U P ) (4) vR = maxpooling(U R) (5) 3. Attention Layer. 
In this layer, the attention mechanism is used in both directions, promptto-response and response-to-prompt, which provides complementary information to each other. However, unlike bi-attention applied to question answering and machine comprehension, including QANet (Yu et al., 2018), BiDAF (Seo et al., 2016) and BiDAF++ (Choi et al., 2018), we use max-pooling of CNN representation on prompt/response to summarize the prompt/response into a fixed-size vector. Prompt-to-Response Attention. Promptto-Response attention implicitly models which response words are more related to the whole prompt, which is crucial to assess the relevance of responses and prompts. Given max pooling vector vP of the prompt and CNN representation U R = {uR t }m t=1 of the response, together with W P = {wP t }n t=1 and W R = {wR t }m t=1, Prompt-to-Response attention cR is then calculated in Equation (6-10), where the similarity function used is trilinear function (Yu et al., 2018) and residual connections are used. ˜uR j = [uR j , wR j ] (6) ˜vP = [vP , avgpooling(W P )] (7) sj = W[˜uR j , ˜vP , ˜uR j ⊙˜vP ] (8) αi = exp(si) Pm j=1 exp(sj) (9) cR = m X i=1 αi˜uR i (10) Response-to-Prompt Attention. Similarly, Response-to-Prompt attention implicitly models which prompt words are more related to the whole response. The calculation of Response-to-Prompt attention, seen in Equation (11-15), is close to Prompt-to-Response 603 Figure 1: An overview of GCBiA. Residual connections were widely used to connect each two-layer. The first two layers are applied to both prompt and response. Convolutions are used in contextual encoder layer and bi-attention mechanism is applied in attention layer. After calculating by the relevance layer with the gated unit, the relevance vector is then fed into the output layer which consists of the normalization layer, dropout, two fully connection layers and softmax. attention. ˜uP j = [uP j , wP j ] (11) ˜vR = [vR, avgpooling(W R)] (12) sj = W[˜uP j , ˜vR, ˜uP j ⊙˜vR] (13) αi = exp(si) Pm j=1 exp(sj) (14) cP = n X i=1 αi˜uP i (15) 4. Relevance Layer. To capture the important parts of responses and attend to the ones relevant to the prompts, we use one gated unit in this layer seen in Equation (16-17). This gated unit focuses on the relation between the prompt and the response. Only relevant parts of each side can remain after the sigmoid operation. The input of this layer is (˜cR = [cR, vR], ˜cP = [cP , vP ]), which uses residual connections of the previous two layers. g = sigmoid(Wg[˜cR, ˜cP ]) (16) [˜cR, ˜cP ]∗= g ⊙[˜cR, ˜cP ] (17) 5. Output Layer. The fixed-length semantic matching vector produced by the previous layer and the previous second layer vector, are fed into the last output layer. It consists of one normalization layer, one dropout, two fully connected layers, and one softmax layer. The output distribution indicates the relevance of the prompt and the response. We classify the output into two categories on-topic or offtopic through the threshold. Different threshold is chosen for the different prompt to make sure the on-topic recall of the prompt meets the lowest requirement, such as 0.999 for the online product system in our study. 604 Part Prompt Part1 How long have you lived in your hometown? Part2 Describe something you bought according to an advertisement you saw. what it was where you saw or heard about it what it was about. Part3 Do you trust advertisements? Table 2: An example from our IELTS speaking test mobile app. 
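Before describing the data, the sketch below makes the relevance layer concrete: a minimal tf.keras rendering of the gated unit in Equations (16)-(17). It is an illustration with assumed tensor names and sizes, not the implementation used in our system.

```python
# A minimal sketch of the gated relevance unit in Eq. (16)-(17), written with
# tf.keras; this is an illustration, not the released implementation, and the
# tensor names and dimensions are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def gated_relevance(c_resp, c_prompt):
    # [c~R, c~P]: concatenate the residual response and prompt vectors
    joint = layers.Concatenate()([c_resp, c_prompt])
    # g = sigmoid(Wg [c~R, c~P])
    gate = layers.Dense(joint.shape[-1], activation="sigmoid")(joint)
    # element-wise gating keeps only the parts relevant to the prompt-response match
    return layers.Multiply()([gate, joint])

# Illustrative shapes: 256-d residual vectors on the response and prompt sides.
c_resp = tf.keras.Input(shape=(256,))
c_prompt = tf.keras.Input(shape=(256,))
matched = gated_relevance(c_resp, c_prompt)
model = tf.keras.Model(inputs=[c_resp, c_prompt], outputs=matched)
```

The output of this gate is the semantic matching vector that the output layer consumes; because the sigmoid is computed jointly over both sides, irrelevant parts of either the prompt or the response are suppressed before classification.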
3 Data 3.1 Dataset Data from our IELTS speaking test mobile app1 was used for training and testing in this paper. There are three parts in the IELTS2 test: Part1 focuses on general questions about test-takers and a range of familiar topics, such as home, family, work, studies, and interests. In Part2, test-takers will be asked to talk about a particular topic. Discussion of more abstract ideas and issues about Part2 will occur in Part3. Here is an example from our IELTS speaking test mobile app, seen in Table 2. All responses from test-takers were generated from our automatic speech recognition (ASR) system, which will be briefly introduced in Section 3.2. Responses for a target prompt collected in our paid service were used as its on-topic training examples, and responses from the other prompts were used as the off-topic training examples for the target prompt. It is a reasonable setup because most of the responses in our paid service are on-topic (we labeled about 5K responses collected under our paid service and found only 1.3% of them are offtopic) and a certain level of “noise” in the training is acceptable. The test data was produced in the same way as the training data except that human validation was further introduced to ensure its validity. To ensure the authenticity of our train and test data further, we filter short responses for each part. The length of words from each response in Part1, Part2, and Part3 should be over 15, 50, and 15, respectively. Table 3 shows the details of our train and test datasets: 1.12M responses from 1356 prompts are used to train our model. The average number of 1https://www.liulishuo.com/ielts.html 2https://www.ielts.org/about-the-test/test-format responses to each prompt is 822. The number of on-topic and off-topic responses are 564.3K and 551.3K in training data. We divide the test data into two parts: seen benchmark and unseen benchmark. Prompts of the seen benchmark can appear in train data, while prompts of unseen benchmark cannot. The seen benchmark consists of 33.6K responses from 156 prompts, including 17.7K ontopic responses and 15.9K off-topic responses, and the average number of responses of each prompt is 216. In the unseen benchmark, there are 10.1K responses from 50 prompts, including 5.0K on-topic responses and 5.1K off-topic responses, and the average number of responses of each prompt is 202. 3.2 ASR System A hybrid deep neural network DNN-HMM system is used for ASR. The acoustic model contains 17 sub-sampled time-delay neural network layers with low-rank matrix factorization (TDNNF) (Povey et al., 2018), and is trained on over 8000 hours of speech, using the lattice-free MMI (Povey et al., 2016) recipe in Kaldi3 toolkit. A tri-gram LM with Kneser-Ney smoothing is trained using the SRILM4 toolkit and applied at first pass decoding to generate word lattices. An RNN-LM (Mikolov et al., 2010) is applied to re-scoring the lattices to achieve the final recognition results. The ASR system achieves a word error rate of around 13% on our 50 hours ASR test set. 3.3 Metric We use two assessment metrics in this paper: Average Off-topic Recall (AOR) and Prompt Ratio over Recall0.3 (PRR3). AOR denotes the average number of off-topic responses recall of all prompts (156 prompts on the seen benchmark and 50 prompts on the unseen benchmark). PPR3 denotes the ratio of prompts whose off-topic recall is over 0.3. Here is a case of AOR and PPR3 on seen benchmark: three prompts have 102, 102, and 102 offtopic responses, respectively. 
Suppose that we have recalled 100, 90 and 30 off-topic responses for the three prompts, off-topic recall of each prompt is 100/102=98.0%, 90/102=88.2%, and 30/102=29.4%. In this case AOR=(100/102 + 90/102 + 30/102)/3=71.9%, and PPR3=2/3=66.7%. To ensure that the off-topic detection is applicable 3http://kaldi-asr.org 4http://www.speech.sri.com/projects/srilm/ 605 Data #Prompt #Resp. #Resp./Prompt On-topic Off-topic Train 1356 1,12M 822 564.3K 551.3K Test Seen 156 33.6K 216 17.7K 15.9K Unseen 50 10.1K 202 5.0K 5.1K Table 3: The train and test datasets for off-topic detection task in real scenes, high on-topic recall (0.999 in this paper) is required. We give restriction that the ontopic recall on each prompt should be over 0.999 when calculating AOR and PPR3. 3.4 Training settings The model is implemented by Keras5. We use pretrained Glove as word embedding, the dimension of which is 300. The train and dev batch size are 1024 and 512. The kernel size, filter number, and block size of CNN are 7, 128, and 7 by tuning on the dev set. The fix-length of prompts and responses are 40 and 280 according to the length distribution of prompts and responses in the training data. Nadam (Dozat, 2016) is used as our optimizer with a learning rate of 0.002. The loss function is binary cross-entropy. The epoch size is 20, and we apply early-stop when dev loss has not been improving for three epochs. 4 Experiments 4.1 Results We carried out experiments on both seen benchmark and unseen benchmark mentioned in Section 3.1. As is shown in Table 4, Att-RNN is our baseline model. To make the evaluation more convincing, we built a stronger baseline model G-AttRNN based on Att-RNN by adding residual connections with each layer. Additionally, we add a gated unit as the relevance layer for our baseline model G-Att-RNN. Compared with Att-RNN, our baseline model G-Att-RNN achieved significant improvements on both seen (by +3.2 PPR3 points and +4.6 AOR points) and unseen benchmark (by +22.0 PPR3 points and +17.1 AOR). From Table 4, comparing with Att-RNN baseline, we can see that our approach GCBiA can achieve impressive improvements by +36.0 PPR3 points and +24.0 AOR points on the unseen benchmark, as well as +9.0 PPR3 points and +7.0 AOR points on the seen benchmark. Meanwhile, our approach significantly outperforms G-Att-RNN by 5https://keras.io/ +14.0 PPR3 points and + 6.9 AOR points on the unseen benchmark, as well as +5.8 PPR3 points and +2.4 AOR points on the seen benchmark. 4.2 Ablation Studies As gated unit and residual connections have been proved useful in Section 4.1, we conducted ablation analysis on seen and unseen benchmarks, seen in Table 4, to further study how other components contribute to the performance based on G-Att-RNN. Because topic words of the prompt were focused on, the bi-attention mechanism is beneficial to replace the uni-attention by adding response-toprompt attention, with +2.0 PPR3 points and +1.6 AOR points improvements on the unseen benchmark, as well as +2.6 PPR3 points and +1.5 AOR points on the seen benchmark. Besides, CNN with average-pooling applied to substitute RNN is also useful on the unseen benchmark by +10.0 PPR3 and +4.0 AOR points improvement. Though a little drop (-1.7% on seen AOR) in performance was caused by CNN with average-pooling, CNN with max-pooling can achieve improvements on the seen benchmark by +2.6 PPR3 and + 2.5 AOR in return. In general, CNN is more suitable than RNN for the contextual encoder layer in our model framework, for seen and unseen prompts. 
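For concreteness, the sketch below shows the kind of convolutional contextual encoder compared here: the stacked convolutions with max pooling of Equations (1)-(5). The hyperparameters mirror the settings in Section 3.4, but the code itself is an illustration, not the implementation used in the experiments; the activation choice is an assumption.

```python
# Sketch of the contextual encoder: a stack of 1-D convolutions over word
# embeddings followed by max pooling (Eq. (1)-(5)). Hyperparameters mirror the
# reported settings (7 layers, kernel size 7, 128 filters); the rest is
# illustrative, not the authors' code.
import tensorflow as tf
from tensorflow.keras import layers

def contextual_encoder(embeddings, num_layers=7, kernel_size=7, filters=128):
    x = embeddings  # shape: (batch, sequence_length, embedding_dim)
    for _ in range(num_layers):
        # 'same' padding keeps the output length equal to the input length
        x = layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(x)
    # U: per-position n-gram features; v: fixed-length max-pooled summary
    return x, layers.GlobalMaxPooling1D()(x)

# e.g. a response of up to 280 tokens with 300-d GloVe embeddings
response_emb = tf.keras.Input(shape=(280, 300))
U_resp, v_resp = contextual_encoder(response_emb)
```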
Finally, we also benefit from the residual connections for the gated unit with +2.8 AOR points improvement on the unseen benchmark. 4.3 Analysis In this section, we analyzed the essence of our model from two perspectives. One is the biattention mechanism visualization and the other is the dimension reduction analysis of the semantic matching representation. More details are illustrated as follows: Bi-Attention Visualization. Figure 2 gives the visualization of the bi-attention mechanism. Biattention mechanism can capture the interrogative “what” and topic words “spare time” of prompt “what do you do in your spare time” seen in subfigure 2a , capture the key-phrases “usually watch 606 Systems Model Seen Unseen PPR3 AOR PPR3 AOR Malinin et al., 2017 Att-RNN 84.6 72.2 32.0 21.0 Our baseline model G-Att-RNN 87.8 76.8 54.0 38.1 This work + Bi-Attention 90.4 78.3 56.0 39.7 + RNN→CNN 89.7 76.6 66.0 43.7 + maxpooling 92.3 79.1 68.0 42.2 + Res-conn in gated unit (GCBiA) 93.6 79.2 68.0 45.0 Table 4: The comparison of different models based on over 0.999 on-topic recall on seen and unseen benchmarks. AOR means Average Off-topic Recall (%) and PRR3 means Prompt Ratio over off-topic Recall 0.3 (%). (a) Attention on the prompt. (b) Attention on the on-topic response. (c) Attention on the off-topic response. Figure 2: The heatmap of attention on the prompt and response. movies” and “shopping” of the response seen in subfigure 2b, and capture the key-phrases “change name” and “name” seen in subfigure 2c. Due to the increased focus on the prompt, bi-attention is more beneficial for assessing the relevance of responses and prompts by matching the key phrases or words between them. The response in subfigure 2b is classified as on-topic, while the response in subfigure 2c is classified as off-topic. Semantic Matching Representation Visualization. As the output vector of the relevance layer using the gated unit can better represent the relevance of prompts and responses, the semantic matching representation was obtained from the relevance layer. With the help of t-SNE (Maaten and Hinton, 2008), the visualization result was shown in Figure 3. Subfigure 3a tells the true response distribution of one prompt, “describe a special meal that you have had, what the meal was, who you had this meal with and explain why this meal was special”, which has a clear-semantic topic “meal”. Meanwhile, subfigure 3b reveals the response distribution using our semantic matching representation on the same prompt as subfigure 3a . We can see that semantic matching representation of our model maintains good performance on this kind of prompt, which has one clear-semantic topic to limit the discussion in one scope. Additionally, some prompts are open to discuss, which are divergent. Given a case of the prompt “what do you do in your spare time”, and we can observe its true response distribution in subfigure 3c . Compared with it in subfigure 3c , our model tends to predict responses on-topic, seen in subfigure 3d , because high on-topic recall (0.999) is limited. 4.4 Negative Sampling Augmentation Method To investigate the impact of training data size, we conduct some experiments with varying sizes of training data. In figure 4, we find that the larger the training data size, the better the performance. Model Seen Unseen PPR3 AOR PPR3 AOR GCBiA 93.6 79.2 68.0 45.0 + neg sampling 94.2 88.2 79.4 69.1 Table 5: The performance of GCBiA with negative sampling augmentation method conditioned on over 0.999 on-topic recall. 
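The AOR and PPR3 figures reported in Tables 4 and 5 follow the definitions in Section 3.3; the sketch below illustrates the computation, assuming the per-prompt off-topic recalls (measured under the 0.999 on-topic recall constraint) are already available. The function and variable names are ours for illustration.

```python
# A small sketch of the evaluation metrics from Section 3.3, assuming the
# per-prompt off-topic recalls are already computed; names are illustrative.

def aor(off_topic_recalls):
    """Average Off-topic Recall: mean of per-prompt off-topic recall."""
    return sum(off_topic_recalls) / len(off_topic_recalls)

def ppr3(off_topic_recalls, threshold=0.3):
    """Prompt Ratio over Recall 0.3: fraction of prompts whose recall is over the threshold."""
    return sum(r > threshold for r in off_topic_recalls) / len(off_topic_recalls)

# The worked example from Section 3.3: three prompts recalling 100, 90, and 30
# of their 102 off-topic responses each.
recalls = [100 / 102, 90 / 102, 30 / 102]
print(f"AOR = {aor(recalls):.1%}, PPR3 = {ppr3(recalls):.1%}")  # ~71.9% and 66.7%
```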
To augment training data and strengthen the generalization of the off-topic response detection 607 (a) True resp distribution on clear-semantic topic prompt. (b) Model’s resp distribution on clear-semantic topic prompt. (c) True response distribution on divergent prompt. (d) Model’s resp distribution on divergent prompt. Figure 3: The analysis of response distribution on different types of prompts. The yellow and black colours represent the on-topic and off-topic response results respectively. Figure 4: Trends of AOR (Average Off-topic Recall) on seen and unseen prompts with datasize variation. model for unseen prompts, we proposed a new and effective negative sampling method for offtopic response detection task. Comparing with the previous method of generating only one negative sample for each positive one, we generated two. The first one is chosen randomly as before, and the second one consists of words shuffled from the first one. This method contributes to the diversity of negative samples of training data. The size of our training data reaches 1.67M, compared with 1.12M in the previous negative sampling method. To make training data balanced, we gave the weight of positive and negative samples: 1 and 0.5, respectively. As is shown in Table 5, a significant performance improvement (+9.0 seen AOR and +24.1 unseen AOR) is achieved by this negative sampling method. Our model GCBiA equipped with negative sampling augmentation can achieve 88.2% and 69.1% average off-topic response recall on seen and unseen prompts, conditioned on 0.999 on-topic recall. 5 Conclusion In this paper, we conducted a series of work around the task of off-topic response detection. First of all, a model framework of five major layers was proposed, within which bi-attention mechanism and convolutions were used to well capture the topic words of prompts and key-phrase of responses, and gated unit as relevance layer was applied to better obtaining semantic matching representation, as well as residual connections with each major layer. Moreover, the visualization analysis of the off-topic model was given to study the essence of the model. Finally, a novel negative sampling augmentation method was introduced to augment off-topic training data. We verified the effectiveness of our approach and achieved significant improvements on both seen and unseen test data. Acknowledgments We are grateful to our colleague Bin Wang for helping with the ASR system. We thank our colleague Puyu Chen for proofreading. Last but not least, we thank the anonymous reviewers for their invaluable comments. References Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wentau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. 2018. Quac: Question answering in context. arXiv preprint arXiv:1808.07036. Timothy Dozat. 2016. Incorporating nesterov momentum into adam. Keelan Evanini and Xinhao Wang. 2014. Automatic detection of plagiarized spoken responses. In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications, pages 22–27. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. 608 Derrick Higgins and Michael Heilman. 2014. Managing what we can measure: Quantifying the susceptibility of automated scoring systems to gaming behavior. Educational Measurement: Issues and Practice, 33(3):36–46. 
Chong Min Lee, Su-Youn Yoon, Xihao Wang, Matthew Mulholland, Ikkyu Choi, and Keelan Evanini. 2017. Off-topic spoken response detection using siamese convolutional neural networks. In INTERSPEECH, pages 1427–1431. Karen E Lochbaum, Mark Rosenstein, PW Foltz, Marcia A Derr, et al. 2013. Detection of gaming in automated scoring of essays with the iea. In National Council on Measurement in Education Conference (NCME), San Francisco, CA. Annie Louis and Derrick Higgins. 2010. Off-topic essay detection using short prompt texts. In proceedings of the NAACL HLT 2010 fifth workshop on innovative use of NLP for building educational applications, pages 92–95. Association for Computational Linguistics. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Andrey Malinin, Kate Knill, Anton Ragni, Yu Wang, and Mark JF Gales. 2017. An attention based model for off-topic spontaneous spoken response detection: An initial study. In SLaTE, pages 144–149. Andrey Malinin, Rogier C Van Dalen, Yu Wang, Katherine Mary Knill, and Mark John Gales. 2016. Off-topic response detection for spontaneous spoken english assessment. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Daniel Povey, Gaofeng Cheng, Yiming Wang, Ke Li, Hainan Xu, Mahsa Yarmohammadi, and Sanjeev Khudanpur. 2018. Semi-orthogonal low-rank matrix factorization for deep neural networks. In Interspeech, pages 3743–3747. Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, and Sanjeev Khudanpur. 2016. Purely sequence-trained neural networks for asr based on lattice-free mmi. In Interspeech, pages 2751–2755. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 189–198. Xinhao Wang, Su-Youn Yoon, Keelan Evanini, Klaus Zechner, and Yao Qian. 2019. Automatic detection of off-topic spoken responses using very deep convolutional neural networks. Proc. Interspeech 2019, pages 4200–4204. Wenpeng Yin, Katharina Kann, Mo Yu, and Hinrich Sch¨utze. 2017. Comparative study of cnn and rnn for natural language processing. arXiv preprint arXiv:1702.01923. Su-Youn Yoon and Shasha Xie. 2014. Similaritybased non-scorable response detection for automated speech scoring. In Proceedings of the Ninth Workshop on Innovative Use of NLP for Building Educational Applications, pages 116–123. Tom Young, Devamanyu Hazarika, Soujanya Poria, and Erik Cambria. 2018. Recent trends in deep learning based natural language processing. ieee Computational intelligenCe magazine, 13(3):55–75. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. 
arXiv preprint arXiv:1804.09541.
2020
56
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6282 The State and Fate of Linguistic Diversity and Inclusion in the NLP World Pratik Joshi∗Sebastin Santy∗ Amar Budhiraja∗ Kalika Bali Monojit Choudhury Microsoft Research, India {t-prjos, t-sesan, amar.budhiraja, kalikab, monojitc}@microsoft.com Abstract Language technologies contribute to promoting multilingualism and linguistic diversity around the world. However, only a very small number of the over 7000 languages of the world are represented in the rapidly evolving language technologies and applications. In this paper we look at the relation between the types of languages, resources, and their representation in NLP conferences to understand the trajectory that different languages have followed over time. Our quantitative investigation underlines the disparity between languages, especially in terms of their resources, and calls into question the “language agnostic” status of current models and systems. Through this paper, we attempt to convince the ACL community to prioritise the resolution of the predicaments highlighted here, so that no language is left behind. 1 The Questions Languages X and Y are the official languages of two different countries; they have around 29M and 18M native speakers, and 2M and 5.5K Wikipedia articles, respectively. X is syntactically quite similar to English, though uses dimunitives and has grammatical gender. Y, on the other hand, has a different word order from English, and has a rare typological feature - generally it is a head-final language, but noun phrases are head-initial. It also features full and partial reduplication. 69 items on LDC and ELRA contain data in X, whereas for Y there are only 2 items. X boasts of some of the best online machine translation systems, whereas Y is supported by very few online MT systems and that too with far inferior translation quality. Figure 1 shows the number of papers in conferences (ACL, NAACL, EACL, EMNLP, LREC, WS) that ∗Authors contributed equally to the work. https://microsoft.github.io/linguisticdiversity (a) ACL + NAACL + EACL + EMNLP (b) LREC + WS Figure 1: Number of papers with mentions of X and Y language for two sets of conferences. mention X and Y in the paper, across the years. As you can see, while X has a steady and growing trend of research, our community has been mostly oblivious to Y, until recently when some of the zero-shot learning papers have started mentioning it. Can you guess what X and Y are? Regardless of whether you can guess the exact answer, most NLP researchers surely know of (and might even speak) several languages which are in the same boat as X; languages which have a large amount of resources and therefore access to the benefits of the current NLP breakthroughs, and languages like Y; those which lack resources and consequently the attention of the NLP community, despite having similar speaker base sizes and typologically diverse features. You probably have come across the issue of extremely skewed distribution of resources across the world’s languages before. You might also be aware of the fact that most of our NLP systems, which are typically declared language agnostic, are not truly so (Bender, 2011). The handful of languages on which NLP systems are trained and tested are often related and from the same geography, drawn from a few dominant language families, leading to a typological echo-chamber. 
As a result, a vast majority of typologically diverse linguistic phenomena are never seen by our NLP systems (Ponti et al., 2019). 6283 Nevertheless, it would be prudent to re-examine these issues in the light of recent advances in deep learning. Neural systems, on one hand, require a lot more data for training than rule-based or traditional ML systems, creating a bigger technological divide between the Xs and Ys; yet, some of the most recent techniques on zero-shot learning of massively multilingual systems (Devlin et al., 2019; Conneau and Lample, 2019; Aharoni et al., 2019; Artetxe and Schwenk, 2019) bridge this gap by obliterating the need for large labeled datasets in all languages. Instead, they need only large unlabeled corpora across languages and labeled data in only some languages. Assuming that this approach can be taken to its promising end, how does the fate of different languages change? We break down this complex prescient question into the following more tractable and quantifiable questions on Linguistic Diversity and Inclusion: 1. How many resources, labeled and unlabeled, are available across the World’s languages? How does this distribution correlate to their number of native speakers? What can we expect to achieve today and in the near future for these languages? 2. Which typological features have current NLP systems been exposed to, and which typological features mostly remain unexplored by systems because we have hardly created any resources and conducted data-driven research in those languages? 3. As a community, how inclusive has ACL been in conducting and publishing research on various languages? In 1980s and early 90s, when large scale datasets were not the prime drivers of research, was the linguistic diversity of ACL higher than what it has been in 2000s and 2010s? Or has ACL become more inclusive and diverse over the years? 4. Does the amount of resource available in a language influence the research questions and the venue of publication? If so, how? 5. What role does an individual researcher, or a research community have to play in bridging the linguistic-resource divide? In this paper, we take a multi-pronged quantitative approach to study and answer the aforementioned questions, presented in order, in the following five sections. One of the key findings of our study, to spill the beans a bit, is that the languages of the World can be broadly classified into 6 classes based on how much and what kind of resources they have; the languages in each class have followed a distinct and different trajectory in the history of ACL, and some of the hitherto neglected classes of languages have more hope of coming to the forefront of NLP technology with the promised potential of zero-shot learning. 2 The Six Kinds of Languages In order to summarize the digital status and ‘richness’ of languages in the context of data availability, we propose a taxonomy based on the number of language resources which exist for different languages. We frame the rest of our analyses based on this taxonomy and use it to emphasize the existence of such resource disparities. 2.1 Features We design this taxonomy using two feature axes: number of unlabeled resources vs. number of labeled resources. Previous methods have mostly relied on supervised learning techniques which require labeled corpora. 
However, the advent of transfer learning methods have boosted the importance of unlabeled data: massively multilingual models such as mBERT use Wikipedia for pre-training, and then fine-tune on downstream NLP tasks. These features are suitable because the current NLP research is predominantly data-driven, and language inclusion depends on how much labeled or unlabeled data is available. We believe these features are sufficient for the taxonomical design as the required metadata is consistently available across all languages, whereas features such as number of hours required to collect data aren’t available. We treat each data resource as a fundamental unit, based on the assumption that the collection of one unit is proportional to a certain extent of effort being invested towards the resource improvement of that language. Moreover, this feature discretization is unambiguous and concrete. Other units such as the total number of datapoints across datasets can be misleading because different NLP tasks have different data requirements. For example, while Machine Translation (MT) models require datapoints to the order of millions (Koehn and Knowles, 2017) to perform competitively, competent models in Question Answering require around 100 thousand datapoints (Rajpurkar et al., 2016). Moreover, the unit of datapoints vary across different technologies (e.g. Speech data measured in hours, MT data measured in number of parallel sentences). 6284 Figure 2: Language Resource Distribution: The size of the gradient circle represents the number of languages in the class. The color spectrum VIBGYOR, represents the total speaker population size from low to high. Bounding curves used to demonstrate covered points by that language class. 2.2 Repositories We focus our attention on the LDC catalog1 and the ELRA Map2 for labeled datasets. Although there are other repositories of data available online, we found it practical to treat these organized collections as a representation of labeled dataset availability. This way, we look at standardized datasets that have established data quality and consistency, and which have been used in prior work. There are strong efforts such as PanLex (Kamholz et al., 2014), which is a large lexical database of a wide range of languages being used for a lexical translator, and OLAC (Simons and Bird, 2003), which contains a range of information for different languages (e.g. text collections, audio recordings, and dictionaries). However, keeping within the purview of NLP datasets used in *CL conferences, we decided to focus on popular repositories such as the above-mentioned. We look at Wikipedia pages as a measure for unlabeled data resources. With regards to language technologies, Wikipedia pages represent a strong source of unsupervised training data which are freely and easily accessible. In the perspective of digital resource availability, they are a comprehensive source of factual information and are accessed by a large, diverse set of online users. 2.3 Language Classes Figure 2 is a visualization of the taxonomy. We find a set of distinct partitions which can be used 1https://catalog.ldc.upenn.edu/ 2http://catalog.elra.info/en-us/ to categorize languages into 6 unique positions in the language resource ‘race’: 0 - The Left-Behinds These languages have been and are still ignored in the aspect of language technologies. With exceptionally limited resources, it will be a monumentous, probably impossible effort to lift them up in the digital space. 
Unsupervised pre-training methods only make the ‘poor poorer’, since there is virtually no unlabeled data to use. 1 - The Scraping-Bys With some amount of unlabeled data, there is a possibility that they could be in a better position in the ‘race’ in a matter of years. However, this task will take a solid, organized movement that increases awareness about these languages, and also sparks a strong effort to collect labelled datasets for them, seeing as they have almost none. 2 - The Hopefuls With light at the end of the tunnel, these languages still fight on with their gasping breath. A small set of labeled datasets has been collected for these languages, meaning that there are researchers and language support communities which strive to keep them alive in the digital world. Promising NLP tools can be created for these languages a few years down the line. 3 - The Rising Stars Unsupervised pre-training has been an energy boost for these languages. With a strong web presence, there is a thriving cultural community online for them. However, they have been let down by insufficient efforts in labeled data collection. With the right steps, these languages can be very well off if they continue to ride the ‘pre-training’ wave. 4 - The Underdogs Powerful and capable, these languages pack serious amounts of resource ‘firepower’. They have a large amount of unlabeled data, comparable to those possessed by the winners, and are only challenged by lesser amount of labeled data. With dedicated NLP communities conducting research on these languages, they have the potential to become winners and enjoy the fruits of ‘digital superiority’. 5 - The Winners Running strong and fast, these languages have been in the lead for quite a while now, some longer than others. With a dominant online presence, there have been massive industrial and government investments in the development of resources and technologies for these languages. They are the quintessential rich-resource 6285 Class 5 Example Languages #Langs #Speakers % of Total Langs 0 Dahalo, Warlpiri, Popoloca, Wallisian, Bora 2191 1.2B 88.38% 1 Cherokee, Fijian, Greenlandic, Bhojpuri, Navajo 222 30M 5.49% 2 Zulu, Konkani, Lao, Maltese, Irish 19 5.7M 0.36% 3 Indonesian, Ukranian, Cebuano, Afrikaans, Hebrew 28 1.8B 4.42% 4 Russian, Hungarian, Vietnamese, Dutch, Korean 18 2.2B 1.07% 5 English, Spanish, German, Japanese, French 7 2.5B 0.28% Table 1: Number of languages, number of speakers, and percentage of total languages for each language class. En Es Ko In, Bh, Gr, Dh, Wl Zu Bn 𝜸 = 1.461 La It Rank of Language 100 101 102 103 No. of Resources 103 102 101 100 (a) LDC En Es Bn Ko In Zu La,Bh, Gr,Dh,Wl It 𝜸 = 1.512 Rank of Language 100 101 102 103 No. of Resources 103 102 101 100 (b) LRE En Es Ko In Bn Bh Zu Gr Dh, Wl It La Rank of Language 0 100 200 300 𝜸 = 0.015 No. of Articles 108 106 102 100 104 (c) Wikipedia En Es Ko In Bn La,Gr,ZuBh,Dh,Wl It 𝜸 = 1.194 Rank of Language 100 101 102 103 Percentage of Web Pages 102 101 100 10-1 (d) Web Figure 3: Plots of different available resources for different languages. Languages to the far right do not have a representation in the resource category. Languages annotated are: Class 0-Dahalo (Dh), Wallisian(Wl); Class 1-Bhojpuri (Bh), Greenlandic (Gr); Class 2-Lao (La), Zulu (Zu); Class 3- Bengali (Bn), Indonesian (In); Class 4- Korean (Ko), Italian (It); Class 5- English (En), Spanish (Es). languages, reaping benefit from each state-of-theart NLP breakthrough. Some more information about the taxonomy is shown in Table 1. 
We also take 10 languages and annotate their positions in Figure 3.

2.4 Findings

On your marks As can be seen in Figure 3, the Winners take pole position in all rankings, and Class 0 languages remain ‘out of the race’ with no representation in any resource. The Wikipedia distribution seems to be more fair for classes 1, 2, and 3 when compared to classes 4 and 5, whereas the Web distribution has a clear disparity.

Talk ain’t cheap Looking at Table 1, we see that Class 0 contains the largest section of languages and represents 15% of all speakers across classes. Although there is a large chunk of speakers who converse in Class 5 languages, the lack of technological inclusion for different languages could draw native speakers away from Class 0 languages and towards Class 5, exacerbating the disparity.

3 Typology

Linguistic typology is a field which involves the classification of languages based on their structural and semantic properties. Large-scale efforts have led to the creation of a database of typological features (Dryer and Haspelmath, 2013). Such documentation becomes important as there are barely any other classifications of similar scale. In the context of NLP research, there has been work indicating the effectiveness of injecting typological information to guide the design of models (Ponti et al., 2019). Also, transfer learning from resource-rich to resource-poor languages has been shown to work better if the respective languages contain similar typological features (Pires et al., 2019). We look at how skewed language resource availability leads to an under-representation of certain typological features, which may in turn cause zero-shot inference models to fail on NLP tasks for certain languages.

We look at the WALS data (Dryer and Haspelmath, 2013), which contains typological features for 2679 languages. There are a total of 192 typological features, with an average of 5.93 categories per feature. We take the languages in classes 0, 1, 2, all of which have limited or no data resources compared to classes 3, 4, 5, and look at how many categories, across all features, exist in classes 0, 1, 2 but not in 3, 4, 5. This comes to a total of 549 out of 1139 unique categories, with an average of 2.86 categories per feature being ignored. Typological features with the most and least ‘ignored’ categories are shown in Table 2.

Most ignored: Feature | #Cat | #Lang
144E | 23 | 38
144M | 23 | 45
144F | 22 | 48
144O | 21 | 30
Least ignored: Feature | #Cat | #Lang
83A | 0 | 1321
82A | 0 | 1302
97A | 0 | 1146
86A | 0 | 1083
Table 2: Most and least ‘ignored’ typological features, the number of categories in each feature which have been ignored, and the number of languages which contain this feature.

To get an idea of what these typological ‘exclusions’ mean in the context of modern multilingual methods, we look at the specific languages which contain these ‘excluded’ categories in the respective features, and compare their performances in similarity search, from the results of Artetxe and Schwenk (2019).

Language | Class | #Speakers | ‘Ignored’ | Error
Amharic | 2 | 22M | 9 | 60.71
Breton | 1 | 210k | 7 | 83.50
Swahili | 2 | 18M | 8 | 45.64
Kabyle | 1 | 5.6M | 8 | 39.10
Table 3: Relevant examples of typologically ‘excluded’ languages. The error rate is that of English → Language from Artetxe and Schwenk (2019).

Table 3 shows some examples of how ‘ignored’ features have been difficult to deal with even when jointly training on all languages.
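The category counting behind Table 2 is straightforward to express; the sketch below assumes a long-format table wals with one row per (language, feature) value plus a taxonomy_class column joined from our taxonomy. The column and function names are illustrative, not those of the WALS distribution or of our scripts.

```python
# A sketch of the "ignored category" computation: count, per feature, the
# (feature, category) values attested only in languages of classes 0-2 and
# never in classes 3-5. `wals` is an assumed long-format DataFrame with columns
# language, taxonomy_class, feature, category; this is not the paper's script.
import pandas as pd

def ignored_category_counts(wals: pd.DataFrame) -> pd.Series:
    low = wals[wals["taxonomy_class"].isin([0, 1, 2])]
    high = wals[wals["taxonomy_class"].isin([3, 4, 5])]
    low_pairs = set(map(tuple, low[["feature", "category"]].drop_duplicates().values))
    high_pairs = set(map(tuple, high[["feature", "category"]].drop_duplicates().values))
    only_low = low_pairs - high_pairs  # categories unseen by resource-rich classes
    # number of 'ignored' categories per feature (the #Cat column of Table 2)
    return pd.Series([feat for feat, _ in only_low]).value_counts()
```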
3.1 Findings Far-reaching repercussions The most ‘ignored’ feature in Table 2, 144E (Multiple Negative Constructions in SVO Languages), is a rare feature, existing in only 38 languages over the world. These languages, however, are from various regions (e.g. Wolof, Icelandic, and Kilivila). Language tools in all these areas can be adversely affected without sufficient typological representation. On the other hand, common features such as 83A (Order of Object and Verb) are well represented with definite feature values for 1321 languages, ranging from English to Mundari. Does it run in the family? Amharic, in Table 3, which among the Semitic family of languages, is the second most spoken language after Arabic (which has 300M speakers). However, it has 9 ‘ignored’ typological features, whereas Arabic has none. This reflects in the error rate of English to Amharic (60.71), which is significantly worse compared to 7.8 for English to Arabic. 4 Conference-Language Inclusion NLP conferences have a huge impact on how language resources and technologies are constructed. Exciting research in venues such as ACL, EMNLP, LREC have the ability to turn heads in both industry and government and have the potential to attract funds to a particular technology. Has the usage of a small set of resource-rich languages in such conferences led to a disparity, pushing the less represented to the bottom of the ladder in terms of research? We analyze the involvement of various languages in NLP research conferences over the years. 4.1 Dataset The ACL Anthology Corpus (ACL-ARC) (Bird et al., 2008) is the most extensively used dataset for analyzing trends in NLP research. This dataset contains PDFs, and parsed XMLs of Anthology papers. However, the latest versioned copy of ACLARC is till 2015 which makes it insufficient for analyzing trends in the most recent years. Moreover, paper data for non-ACL conferences such as LREC, COLING are absent from this dataset. In order to create a consistent data model, we augment this dataset by using Semantic Scholar’s API and scraping ACL Anthology itself. Thus, we gather a consolidated dataset for 11 conferences which are relevant in judging global trends in NLP research. These include ACL, NAACL, EMNLP, EACL, COLING, LREC, CONLL, Workshops (WS) (all since 1990), SEMEVAL, TACL and CL Journals. We have attached the statistics of the dataset in Appendix A. 4.2 Analysis 4.2.1 Language Occurrence Entropy The primary step of measuring the language diversity and inclusion of a conference and their progress is to measure the usage of language in that conference over multiple iterations. One of the ways to do it is by using frequency-based techniques where we can measure the occurrence of languages in that iteration. However, it is not a unified measure which represents the nature of language distribution with a single number. To this end, we use entropy as our metric to measure language inclusivity of each conference. It efficiently captures the skew in the distribution of languages, 6287 (a) c = ACL (b) c = NAACL (c) c = EMNLP (d) c = EACL (e) c = COLING (f) c = CL (g) c = WS (h) c = CONLL (i) c = SEMEVAL (j) c = LREC Figure 4: Language occurrence entropy over the years for different conferences ({S}c,y). thereby making the disparity in language usage more clearer. The language occurrence entropy is calculated as follows: For a conference c held in year y having P papers, there exists a binary matrix {MP×L}c,y where Mij is 1 if ith paper (∈P) mentions the jth language (∈L). 
Then the entropy {S}_{c,y} is:

\{S_j\}_{c,y} = \frac{1}{P} \sum_{i=1}^{P} \{M_{ij}\}_{c,y}
\{S'_j\}_{c,y} = \frac{\{S_j\}_{c,y}}{\sum_{j=1}^{L} \{S_j\}_{c,y}}
\{S\}_{c,y} = - \sum_{j=1}^{L} \{S'_j\}_{c,y} \log_e \{S'_j\}_{c,y}    (1)

where \{S_j\}_{c,y} is an array of length L giving, for each language, the fraction of the conference's papers that mention it, and \{S'_j\}_{c,y} is its normalization, yielding the probability distribution over languages from which the entropy is calculated. In short, the higher the entropy, the more spread out the distribution is over the languages; the more peaked or skewed the distribution, the lower the entropy. In Figure 4, we can observe the entropy S plotted for each c as a function of y.

4.2.2 Class-wise Mean Reciprocal Rank

To quantify the extent of inclusion of language classes from our taxonomy in different conferences, we employ class-wise Mean Reciprocal Rank (MRR) as a metric. This helps in determining the standing of each class in a conference. If the rank of a language (rank_i) is ordered by the frequency with which it is mentioned in papers of a particular conference, and Q is the total number of queries, i.e. the number of languages in each class, then:

MRR = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{\mathrm{rank}_i}    (2)

Table 4 shows the inverse mean reciprocal rank of each class for each conference. The smaller the inverse MRR value, the more inclusive that conference is to that language class.

Conf / Class   0     1     2     3     4    5
ACL            725   372   157   63    20   3
CL             647   401   175   76    27   3
COLING         670   462   185   74    21   2
CONLL          836   576   224   64    16   3
EACL           839   514   195   63    15   3
EMNLP          698   367   172   67    19   3
LREC           811   261   104   45    13   2
NAACL          754   365   136   63    18   3
SEMEVAL        730   983   296   121   19   3
TACL           974   400   180   50    15   3
WS             667   293   133   59    15   3

Table 4: Class-wise (1/MRR) for each conference.

4.3 Findings

All-Inclusive Looking at the combined trends, both the entropy plots and the MRR figures suggest that LREC and WS have been the most inclusive across all categories, and have continued to be so over the years.

A ray of hope With regards to the proceedings of ACL, EMNLP, NAACL and LREC, we note a marked spike in entropy in the 2010s, which is absent in other conferences. This might be due to the increased buzz surrounding cross-lingual techniques.

The later the merrier An interesting point to note is that conferences which started later have taken lessons from the past in matters of language inclusion, while the earlier established conferences have continued to maintain interest in a particular underlying theme of research which may or may not favour multilingual systems. This can be observed in COLING, ACL, EACL and EMNLP (in order of their start dates).

Falling off the radar The taxonomical hierarchy is fairly evident when looking at the MRR table (Table 4), with class 5 coming within ranks 2–3 and class 0 being 'left behind' with average ranks ranging from 600 to 1000. While the dip in ranks is more forgiving for conferences such as LREC and WS, it is more stark in CONLL, TACL and SEMEVAL.
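As a concrete illustration of the two metrics defined in Sections 4.2.1 and 4.2.2, the sketch below computes the occurrence entropy of Equation 1 from a binary paper–language matrix and the class-wise MRR of Equation 2. It is a minimal example, not the analysis code behind Figure 4 and Table 4; the function names and toy inputs are purely illustrative.

```python
import numpy as np

def occurrence_entropy(M):
    """Equation 1: entropy of the language distribution for one conference-year.

    M: binary matrix of shape (num_papers, num_languages),
       M[i, j] = 1 if paper i mentions language j.
    """
    s = M.mean(axis=0)                    # {S_j}_{c,y}: fraction of papers per language
    s = s[s > 0]                          # languages never mentioned contribute zero
    p = s / s.sum()                       # {S'_j}_{c,y}: normalised distribution
    return float(-(p * np.log(p)).sum())  # {S}_{c,y}

def class_mrr(language_ranks, class_languages):
    """Equation 2: mean reciprocal rank of one taxonomy class.

    language_ranks:  dict language -> rank by mention frequency in a conference
    class_languages: list of languages belonging to the class
    """
    return sum(1.0 / language_ranks[l] for l in class_languages) / len(class_languages)

# Toy example: 3 papers, 4 languages.
M = np.array([[1, 1, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 1, 0]])
print(occurrence_entropy(M))
print(1.0 / class_mrr({"english": 1, "breton": 40}, ["breton"]))  # inverse MRR, as in Table 4
```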
5 Entity Embedding Analysis

The measures discussed in the previous section signal the variance in acceptance of different languages at different NLP venues across time. However, there are usually multiple subtle factors which vanilla statistics fail to capture. Embeddings, on the other hand, have been found extensively useful in NLP tasks as they are able to learn relevant signals directly from the data and uncover rather complex nuances. To this end, we propose a novel approach to jointly learn the representations of conferences, authors and languages, which we collectively term entities. The proposed embedding method allows us to project these entities into the same space, enabling us to effectively reveal patterns revolving around them.

5.1 Model

We define the following model to jointly learn the embeddings of entities, such that entities which have similar contextual distributions are close together. For example, for an author A who works more extensively on language Li than Lj and publishes more at conference Cm than at conference Cn, the embedding of A would be closer to Li than to Lj and to Cm than to Cn.

Given an entity and a paper associated with the entity, the learning task of the model is to predict K randomly sampled words from the title and the abstract of the paper. We only select the title and abstract, as compared to the entire paper text, as they provide a concise signal with reduced noise. This model draws parallels to the Skipgram model of Word2Vec (Mikolov et al., 2013), where, given an input word, the task is to predict the context around it; the input entity and the K randomly sampled words in our case correspond to the input word and its context in the Skipgram model. The goal of the model is to maximize the probability of predicting the K random words, given the entity id as input:

\frac{1}{M} \frac{1}{K} \sum_{m=1}^{M} \sum_{k=1}^{K} \sum_{i=1}^{I} p(w_k \mid E_{\langle i, P_j \rangle})    (3)

where E_{\langle i, P_j \rangle} is the entity E_i associated with the P_j-th paper, p is the probability of predicting the word w_k out of the K words sampled from the paper, and M is the total number of papers in the dataset. To optimize the above objective, we use a typical SGD-based learning strategy, similar to Word2Vec (Mikolov et al., 2013).

Figure 5: Model architecture to learn entity embeddings. The entity input layer (E-dim) is connected to the hidden layer (N-dim) by the weight matrix W_{E×N}, and the hidden layer is connected to the word output layer (V-dim) by the weight matrix W_{N×V}. At the end of training, W_{E×N} is the matrix containing the embeddings of entities and W_{N×V} is the matrix containing the embeddings of words.

Figure 5 shows an outline of the model. The entity input layer has dimension equal to the total number of entities in the dataset (E). The hidden layer size is set to the desired embedding dimension (N). The output layer predicts words for the input entity and is of the same size as the vocabulary (V).

The entities we learn are: (1) the authors of the paper, (2) the languages mentioned in the paper, (3) the conference where the paper was accepted (e.g. ACL), and (4) the conference iteration (e.g. ACL'19). We describe the model details and hyperparameter tuning in Appendix A.
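A minimal sketch of the training objective above, written in PyTorch, is given below. It is not our released implementation: the entity and vocabulary sizes, the sampled (entity, word) pairs, and all variable names are illustrative, and only the embedding dimension (75, the best value from Appendix A.3) follows the reported setting. In the full model, the word targets are the K words sampled from the titles and abstracts of the papers each entity is associated with, and all four entity types share the single entity embedding table.

```python
import torch
import torch.nn as nn

class EntityEmbedder(nn.Module):
    """Skipgram-style model: predict sampled title/abstract words from an entity id."""

    def __init__(self, num_entities, vocab_size, dim=75):
        super().__init__()
        self.entity = nn.Embedding(num_entities, dim)       # rows of W_{E x N}
        self.out = nn.Linear(dim, vocab_size, bias=False)   # W_{N x V}

    def forward(self, entity_ids):
        return self.out(self.entity(entity_ids))            # logits over the vocabulary

# One SGD step on a toy batch: each (entity, word) pair is one of the K words
# sampled from the title/abstract of a paper the entity is associated with.
model = EntityEmbedder(num_entities=1000, vocab_size=5000)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
entity_ids = torch.tensor([3, 3, 42])    # e.g. an author appearing twice, a language once
word_ids = torch.tensor([17, 891, 17])   # sampled words from the corresponding papers
loss = nn.functional.cross_entropy(model(entity_ids), word_ids)
opt.zero_grad()
loss.backward()
opt.step()
```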
Figure 6: t-SNE visualization of the learnt conference and language embeddings.

Class  MRR(5)   MRR(10)  MRR(15)  MRR(20)
0      0.72281  0.69146  0.63852  0.57441
1      0.57210  0.52585  0.45354  0.40904
2      0.47039  0.45265  0.41521  0.38157
3      0.59838  0.52670  0.45131  0.42899
4      0.56016  0.47795  0.51199  0.50681
5      0.56548  0.51471  0.54326  0.47619

Table 5: Language-Author-Language MRR on taxonomy classes. MRR(K) considers the closest K authors.

5.2 Analysis

In order to better understand how languages are represented at different venues, we visualize the distribution of entity embeddings by projecting the generated embeddings into 2 dimensions using t-SNE (Maaten and Hinton, 2008), as shown in Figure 6. For clarity, we only plot ACL, LREC, WS and CL among the conferences, and all languages from the taxonomy except those in Class 0. We omit plotting Class 0 languages as their projections are noisy and scattered due to their infrequent occurrence in papers.

To understand the research contributions of individual authors or communities towards research in the respective language classes, we leverage the distribution between author and language entities by computing a variation of the Mean Reciprocal Rank (MRR). We consider a language L, take the K closest authors to L using cosine distance, and then take the closest M languages to each author. If L is present in the closest languages of an author, we take the rank of L in that list, invert it, and average it over the K authors. To compute this metric for a class of languages from the taxonomy, we take the mean of the MRR for all languages in that class. We fix M to be 20, so as to understand the impact of the community when the number of languages remains unchanged. Table 5 shows the MRR for the various classes of languages. A higher value of this measure indicates a more focused community working on that particular language, rather than a diverse range of authors.

5.3 Findings

Time waits for no conference We can see a left-to-right trend in Figure 6, with ACL in 1983 on the left and subsequent iterations laid out as we go right. We observe the same trend for EACL, NAACL, EMNLP, CONLL, TACL, and COLING. We can say that this axis represents the progression of time to a certain extent. Alternatively, it may even represent a shift in the focus of NLP research, moving from theoretical research focused on grammar and formalisms on the left to a data-driven, more ML-oriented approach on the right. This can be observed as most of the CL embeddings are positioned on the left, given their theoretical research focus.

Long distance relationships? From Figure 6, we can note that the less-resourced language classes are farther away from the trend-line of ACL than the more resourced ones, with class 5 being closest and class 1 being farthest. The visualization illustrates that languages spread out radially downwards from the ACL trend-line, with popular classes of the taxonomy like class 5 and class 4 being closer while others spread out farther. Again, as previous analyses have shown us, LREC and WS embeddings are closer to the language embeddings as compared to the other conferences, as shown in Figure 6. In fact, the LREC cluster is right in the middle of the language clusters, and so is the major part of the WS cluster, especially in recent iterations.

Not all heroes wear capes Table 5 shows the MRR for each class of languages in the taxonomy. From Table 5, it can be seen that class 0 has the highest MRR across different K values. This shows that perhaps low-resource languages have some research groups solely focused on the challenges related to them. There is a decreasing trend of MRR from class 0 to class 5, except for class 2, thereby indicating that more popular languages are addressed by more authors. We also observe that even though Japanese, Mandarin, Turkish and Hindi (MRR(10) > 0.75) are part of class 5 and class 4, their MRR is higher even compared to low-resource languages in other classes, indicating that these languages have focused research communities working on them.
On the other end of the spectrum, we observe a lot of low resource languages like Burmese (MRR(10) = 0.02), Javanese (MRR(10) = 0.23) and Igbo (MRR(10) = 0.13) which have millions of speakers but significantly low MRR values, potentially indicating that not a lot of attention is being given to them in the research community. 6 Conclusion We set out to answer some critical questions about the state of language resource availability and research. We do so by conducting a series of quantitative analyses through the lens of a defined taxonomy. As a result, we uncover a set of interesting insights and also yield consistent findings about language disparity: — The taxonomical hierarchy is repeatedly evident from individual resource availabilities (LDC, LRE, Wikipedia, Web), entropy calculations for conferences, and the embeddings analysis. — LREC and Workshops(WS) have been more inclusive across different classes of languages, seen through the inverse MRR statistics, entropy plots and the embeddings projection. — There are typological features (such as 144E), existing in languages over spread out regions, represented in many resource-poor languages but not sufficiently in resource-rich languages. This could potentially reduce the performance of language tools relying on transfer learning. — Newer conferences have been more languageinclusive, whereas older ones have maintained interests in certain themes of research which don’t necessarily favour multilingual systems. — There is a possible indication of a time progression or even a technological shift in NLP, which can be visualized in the embeddings projection. — There is hope for low-resource languages, with MRR figures indicating that there are focused communities working on these languages and publishing works on them, but there are still plenty of languages, such as Javanese and Igbo, which do not have any such support. We believe these findings will play a strong role in making the community aware of the gap that needs to be filled before we can truly claim state-ofthe-art technologies to be language agnostic. Pertinent questions should be posed to authors of future publications about whether their proposed language technologies extend to other languages. There are ways to improve the inclusivity of ACL conferences. Special tracks could be initiated for low-resource, language-specific tasks, although we believe that in doing so, we risk further marginalization of those languages. Instead, a way to promote change could be the addition of D&I (Diversity and Inclusion) clauses involving language-related questions in the submission and reviewer forms: Do your methods and experiments apply (or scale) to a range of languages? Are your findings and contributions contributing to the inclusivity of various languages? Finally, in case you’re still itching to know, Language X is Dutch, and Y is Somali. 7 Acknowledgements We would like to thank Anshul Bawa, Adithya Pratapa, Ashish Sharma for their valuable feedback during the final phase of work. We would also like to thank the anonymous reviewers for their many insightful comments and suggestions. References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technolo6291 gies, Volume 1 (Long and Short Papers), pages 3874–3884, Minneapolis, Minnesota. Association for Computational Linguistics. 
Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597–610. Emily M. Bender. 2011. On achieving and evaluating language-independence in NLP. In Vol 6: Interaction of Linguistics and Computational Linguistics. Steven Bird, Robert Dale, Bonnie J Dorr, Bryan Gibson, Mark Thomas Joseph, Min-Yen Kan, Dongwon Lee, Brett Powley, Dragomir R Radev, and Yee Fan Tan. 2008. The ACL anthology reference corpus: A reference dataset for bibliographic research in computational linguistics. LREC’08. Alexis Conneau and Guillaume Lample. 2019. Crosslingual language model pretraining. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 7059– 7069. Curran Associates, Inc. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. David Kamholz, Jonathan Pool, and Susan Colowick. 2014. PanLex: Building a resource for panlingual lexical translation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 3145– 3150, Reykjavik, Iceland. European Language Resources Association (ELRA). Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39, Vancouver. Association for Computational Linguistics. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(Nov):2579–2605. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996– 5001, Florence, Italy. Association for Computational Linguistics. Edoardo Maria Ponti, Helen O’Horan, Yevgeni Berzak, Ivan Vuli´c, Roi Reichart, Thierry Poibeau, Ekaterina Shutova, and Anna Korhonen. 2019. Modeling language variation and universals: A survey on typological linguistics for natural language processing. Computational Linguistics, 45(3):559–601. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Gary Simons and Steven Bird. 2003. The open language archives community: An infrastructure for distributed archiving of language resources. Computing Research Repository - CORR, 18:117–128. 6292 A Appendix A.1 Embedding Visualization We have compiled a visualization of the embedding space of conferences and languages which can be run on a browser. 
This is available as an interactive visualization on https://microsoft.github.io/linguisticdiversity, and can be used to play around with different combinations to see how NLP research has progressed over the years in terms of language inclusion. The legends are self-explanatory and are clickable to add or remove those points. The numbers in the legend represent the respective classes.

A.2 ACL Anthology Dataset Statistics

We have accounted for all the papers which have appeared in the main track proceedings of each conference. This includes all the long and short papers, and excludes System Demonstrations, Tutorial Abstracts, Student Research Workshops, Special Issues, and other such tracks out of the scope of measuring language usage trends in general NLP research. We are in the process of releasing the dataset along with the documentation.

Conf     #Papers  #Body  #NoProc  %Missing
LREC     5835     15     6        0.1%
WS       17844    337    332      1.86%
CONLL    1035     0      0        0.0%
EACL     1165     4      1        0.09%
ACL      5776     46     29       0.5%
TACL     280      7      0        0.0%
CL       2025     88     0        0.0%
NAACL    2188     2      1        0.05%
COLING   4233     5      2        0.05%
EMNLP    3865     16     16       0.41%

Table 6: Dataset statistics.

A.3 Hyperparameter Tuning

Our model has the same hyperparameters as Word2Vec. To determine the optimal hyperparameters, we take the entire dataset, split it in an 80-20 ratio, and, given the embedding of a paper, train a linear regression model to predict the year in which the paper was published. We measure both the R2 measure of variance in regression and the mean absolute error (MAE). R2 is usually in the range of 0 to 1.00 (or 0 to 100%), where 1.00 is considered to be the best. MAE has no upper bound, but the smaller it is the better, and 0 is its ideal value. We observed that our model does not show significant differences across any hyperparameters except for the size of the embeddings. The best dimension size for our embeddings is 75, with a corresponding R2 value of 0.6 and an MAE value of 4.04.

A.4 Cosine distance between conferences and languages

From Figure 6, we can see that languages lie somewhat below the conferences, and are closer to some conferences while distant from others. To quantify this observation, we compute the cosine distance between each conference vector and the mean vector of each category of the taxonomy. Table 7 shows the cosine distance between the conferences and each category of languages, and we see a very similar trend: while ACL is at an average distance of 0.291 from category 5 languages, it is almost more than twice as far away from category 2. There is also a very steep rise in the distance of the ACL vector from category 4 to category 3. In fact, similar trends are visible for the other ACL-related conferences, including EACL, NAACL, EMNLP and TACL. We can also see in Table 7 that WS and LREC are closest from category 2 to category 5, whereas almost all conferences are at a similar distance from each category, with the exception of the CL journal. The trend for category 0 languages seems somewhat different from the usual trend in this table, probably because of the large number of languages in this category as well as the sparsity in papers.
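The distances reported in Table 7 can, in principle, be reproduced with a computation of the following form; the sketch below is illustrative, with made-up vectors standing in for the learnt 75-dimensional embeddings.

```python
import numpy as np

def conference_to_class_distance(conf_vec, class_vectors):
    """Cosine distance between a conference embedding and the mean embedding of a class."""
    mean_vec = np.mean(class_vectors, axis=0)
    cos_sim = np.dot(conf_vec, mean_vec) / (np.linalg.norm(conf_vec) * np.linalg.norm(mean_vec))
    return 1.0 - cos_sim

# Toy example with 3-dimensional embeddings (the real embeddings are 75-dimensional).
acl = np.array([0.2, 0.9, 0.1])
class5_languages = np.array([[0.3, 0.8, 0.2],
                             [0.1, 0.7, 0.0]])
print(round(conference_to_class_distance(acl, class5_languages), 3))
```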
Conf / Class   0     1     2     3     4     5
LREC           0.51  0.51  0.52  0.42  0.36  0.32
WS             0.50  0.55  0.53  0.40  0.28  0.21
CONLL          0.54  0.60  0.63  0.49  0.40  0.46
EACL           0.53  0.55  0.59  0.45  0.34  0.32
ACL            0.48  0.51  0.60  0.42  0.34  0.29
TACL           0.52  0.56  0.66  0.48  0.38  0.47
CL             0.67  0.78  0.80  0.75  0.65  0.59
NAACL          0.48  0.52  0.59  0.47  0.39  0.33
COLING         0.48  0.53  0.55  0.46  0.37  0.30
EMNLP          0.57  0.59  0.66  0.51  0.46  0.45

Table 7: Cosine distance between conference vectors and mean class vectors of languages.

A.5 Taxonomy classification

We release our full language taxonomy classification on the website: https://microsoft.github.io/linguisticdiversity.

A.6 Class-wise log(MRR) over the years per conference

We plot MRR on a log scale for each conference to measure the progress of inclusion of the defined taxonomy classes over the years. It is very interesting to note how LREC has a very smooth forward progression. Per-conference plots are shown for ACL, CL, COLING, CONLL, LREC, WS, EMNLP, NAACL and SEMEVAL.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6294–6306 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6294 The Unstoppable Rise of Computational Linguistics in Deep Learning James Henderson Idiap Research Institute, Switzerland [email protected] Abstract In this paper, we trace the history of neural networks applied to natural language understanding tasks, and identify key contributions which the nature of language has made to the development of neural network architectures. We focus on the importance of variable binding and its instantiation in attention-based models, and argue that Transformer is not a sequence model but an induced-structure model. This perspective leads to predictions of the challenges facing research in deep learning architectures for natural language understanding. 1 Introduction When neural networks first started being applied to natural language in the 1980s and 90s, they represented a radical departure from standard practice in computational linguistics. Connectionists had vector representations and learning algorithms, and they didn’t see any need for anything else. Everything was a point in a vector space, and everything about the nature of language could be learned from data. On the other hand, most computational linguists had linguistic theories and the poverty-of-thestimulus argument. Obviously some things were learned from data, but all the interesting things about the nature of language had to be innate. A quarter century later, we can say two things with certainty: they were both wrong. Vector-space representations and machine learning algorithms are much more powerful than was thought. Much of the linguistic knowledge which computational linguists assumed needed to be innate can in fact be learned from data. But the unbounded discrete structured representations they used have not been replaced by vector-space representations. Instead, the successful uses of neural networks in computational linguistics have replaced specific pieces of computational-linguistic models with new neural network architectures which bring together continuous vector spaces with structured representations in ways which are novel for both machine learning and computational linguistics. Thus, the great progress which we have made through the application of neural networks to natural language processing should not be viewed as a conquest, but as a compromise. As well as the unquestionable impact of machine learning research on NLP, the nature of language has had a profound impact on progress in machine learning. In this paper we trace this impact, and speculate on future progress and its limits. We start with a sketch of the insights from grammar formalisms about the nature of language, with their multiple levels, structured representations and rules. The rules were soon learned with statistical methods, followed by the use of neural networks to replace symbols with induced vectors, but the most effective models still kept structured representations, such as syntactic trees. More recently, attention-based models have replaced hand-coded structures with induced structures. The resulting models represent language with multiple levels of structured representations, much as has always been done. Given this perspective, we identify remaining challenges in learning language from data, and its possible limitations. 
2 Grammar Formalisms versus Connectionism 2.1 Grammar Formalisms Our modern understanding of the computational properties of language started with the introduction of grammar formalisms. Context Free Grammars (Chomsky, 1959) illustrated how a formal system could model the infinite generative capacity of language with a bounded grammar. This formalism soon proved inadequate to account for the diversity 6295 of phenomena in human languages, and a number of linguistically-motivated grammar formalisms were proposed (e.g HPSG (Pollard and Sag, 1987), TAG (Joshi, 1987), CCG (Steedman, 2000)). All these grammar formalisms shared certain properties, motivated by the understanding of the nature of languages in Linguistics. They all postulate representations which decompose an utterances into a set of sub-parts, with labels of the parts and a structure of inter-dependence between them. And they all assume that this decomposition happens at multiple levels of representation. For example that spoken utterances can be decomposed into sentences, sentences can be decomposed into words, words can be decomposed into morphemes, and morphemes can be decomposed into phonemes, before we reach the observable sound signal. In the interests of uniformity, we will refer to the subparts in each level of representation as its entities, their labels as their properties, and their structure of inter-dependence as their relations. The structure of inter-dependence between entities at different levels will also be referred to as relations. In addition to these representations, grammar formalisms include specifications of the allowable structures. These may take the form of hard constraints or soft objectives, or of deterministic rules or stochastic processes. In all cases, the purpose of these specifications is to account for the regularities found in natural languages. In the interests of uniformity, we will refer to all these different kinds of specifications of allowable structures as rules. These rules may apply within or between levels of representation. In addition to explicit rules, computational linguistic formalisms implicitly make claims about the regularities found in natural languages through their expressive power. Certain types of rules simply cannot be specified, thus claiming that such rules are not necessary to capture the regularities found in any natural language. These claims differ across formalisms, but the study of the expressive power of grammar formalisms have identified certain key principles (Joshi et al., 1990). Firstly, that the set of rules in a given grammar is bounded. This in turn implies that the set of properties and relations in a given grammar is also bounded. But language is unbounded1 in nature, since sentences and texts can be arbitrarily long. Grammar 1A set of things (e.g. the sentences of a language) have unbounded size if for any finite size there is always some element in the set which is larger than that. formalisms capture this unboundedness by allowing an unbounded number of entities in a representation, and thus an unbounded number of rule applications. It is generally accepted that the number of entities grows linearly with the length of the sentence (Joshi et al., 1990), so each level can have at most a number of entities which is linear in the number of entities at the level(s) below. Computational linguistic grammar formalisms also typically assume that the properties and relations are discrete, called symbolic representations. 
These may be atomic categories, as in CFGs, TAGs, CCG and dependency grammar, or they may be feature structures, as in HPSG. 2.2 Connectionism Other researchers who were more interested in the computational properties of neurological systems found this reliance on discrete categorical representations untenable. Processing in the brain used real-valued representations distributed across many neurons. Based on successes following the development of multi-layered perceptrons (MLPs) (Rumelhart et al., 1986b), an approach to modelling cognitive phenomena was developed called connectionism. Connectionism uses vector-space representations to reflect the distributed continuous nature of representations in the brain. Similarly, their rules are specified with vectors of continuous parameters. MLPs are so powerful that they are arbitrary function approximators (Hornik et al., 1989). And thanks to backpropagation learning (Rumelhart et al., 1986a) in neural network models, such as MLPs and Simple Recurrent Networks (SRNs) (Elman, 1990), these vector-space representations and rules could be learned from data. The ability to learn powerful vector-space representations from data led many connectionist to argue that the complex discrete structured representations of computational linguistics were neither necessary nor desirable (e.g. Smolensky (1988, 1990); Elman (1991); Miikkulainen (1993); Seidenberg (2007)). Distributed vector-space representations were thought to be so powerful that there was no need for anything else. Learning from data made linguistic theories irrelevant. (See also (Collobert and Weston, 2008; Collobert et al., 2011; Sutskever et al., 2014) for more recent incarnations.) The idea that vector-space representations are adequate for natural language and other cognitive phenomena was questioned from several directions. 6296 From neuroscience, researchers questioned how a simple vector could encode features of more than one thing at a time. If we see a red square together with a blue triangle, how do we represent the difference between that and a red triangle with a blue square, since the vector elements for red, blue, square and triangle would all be active at the same time? This is known as the variable binding problem, so called because variables are used to do this binding in symbolic representations, as in red(x) ∧triangle(x) ∧blue(y) ∧square(y). One proposal has been that the precise timing of neuron activation spikes could be used to encode variable binding, called Temporal Synchrony Variable Binding (von der Malsburg, 1981; Shastri and Ajjanagadde, 1993). Neural spike trains have both a phase and a period, so the phase could be used to encode variable binding while still allowing the period to be used for sequential computation. This work indicated how entities could be represented in a neurally-inspired computational architecture. The adequacy of vector-space representations was also questioned based on the regularities found in natural language. In particular, Fodor and Pylyshyn (1988) argued that connectionist architectures were not adequate to account for regularities which they characterised as systematicity (see also (Smolensky, 1990; Fodor and McLaughlin, 1990)). In essence, systematicity requires that learned rules generalise in a way that respects structured representations. Here again the issue is representing multiple entities at the same time, but with the additional requirement of representing the structural relationships between these entities. 
Only rules which are parameterised in terms of such representations can generalise in a way which accounts for the generalisations found in language. Early work on neural networks for natural language recognised the significance of variable binding for solving the issues with systematicity (Henderson, 1996, 2000). Henderson (1994, 2000) argued that extending neural networks with temporal synchrony variable binding made them powerful enough to account for the regularities found in language. Using time to encode variable bindings means that learning could generalise in a linguistically appropriate way (Henderson, 1996), since rules (neuronal synapses) learned for one variable (time) would systematically generalise to other variables. Although relations were not stored explicitly, it was claimed that for language understanding it is adequate to recover them from the features of the entities (Henderson, 1994, 2000). But these arguments were largely theoretical, and it was not clear how they could be incorporated in learning-based architectures. 2.3 Statistical Models Although researchers in computational linguistics did not want to abandon their representations, they did recognise the importance of learning from data. The first successes in this direction came from learning rules with statistical methods, such as part-of-speech tagging with hidden Markov models. For syntactic parsing, the development of the Penn Treebank led to many statistical models which learned the rules of grammar (Collins, 1997, 1999; Charniak, 1997; Ratnaparkhi, 1999). These statistical models were very successful at learning from the distributions of linguistic representations which had been annotated in the corpus they were trained on. But they still required linguistically-motivated designs to work well. In particular, feature engineering is necessary to make sure that these statistical machine-learning method can search a space of rules which is sufficiently broad to include good models but sufficiently narrow to allow learning from limited data. 3 Inducing Features of Entities Early work on neural networks for natural language recognised the potential of neural networks for learning the features as well, replacing feature engineering. But empirically successful neural network models for NLP were only achieved with approaches where the neural network was used to model one component within an otherwise traditional symbolic NLP model. The first work to achieve empirical success in comparison to non-neural statistical models was work on language modelling. Bengio et al. (2001, 2003) used an MLP to estimate the parameters of an n-gram language model, and showed improvements when interpolated with a statistical n-gram language model. A crucial innovation of this model was the introduction of word embeddings. The idea that the properties of a word could be represented by a vector reflecting the distribution of the word in text was introduced earlier in non-neural statistical models (e.g. (Deerwester et al., 1990; Sch¨utze, 1993; Burgess, 1998; Pad´o and Lapata, 2007; Erk, 2010)). This work showed that similarity in the 6297 PTB Constituents model LP LR F1 Costa et al. (2001) PoS 57.8 64.9 61.1 Henderson (2003) PoS 83.3 84.3 83.8 Henderson (2003) 88.8 89.5 89.1 Henderson (2004) 89.8 90.4 90.1 Vinyals et al. (2015) seq2seq <70 Vinyals et al. (2015) attn 88.3 Vinyals et al. 
(2015) seq2seq semisup 90.5 CoNLL09 Dependencies model (transition-based) UAS LAS Titov and Henderson (2007a)* 91.44 88.65 Chen and Manning (2014)* 89.17 86.49 Yazdani and Henderson (2015) 90.75 88.14 Stanford Dependencies model (transition-based) UAS LAS Chen and Manning (2014) 91.80 89.60 Dyer et al. (2015) 93.10 90.90 Andor et al. (2016) 94.61 92.79 Kiperwasser and Goldberg (2016) 93.9 91.9 Mohammadshahi and Henderson (2019) BERT 95.63 93.81 Table 1: Some neural network parsing results on Penn Treebank WSJ. LP/LR/F1: labelled constituent precision/recall/F-measure. UAS/LAS: unlabelled/labelled dependency accuracy. * results reported in (Yazdani and Henderson, 2015). resulting vector space is correlated with semantic similarity. Learning vector-space representations of words with neural networks (rather than SVD) have showed similar effects (e.g. (Turian et al., 2010; Mikolov et al., 2013; Levy et al., 2015; Pennington et al., 2014)), resulting in impressive improvements for many NLP tasks. More recent work has used neural network language models to learn context-dependent embeddings of words. We will refer to such contextdependent embeddings as token embeddings. For example, Peters et al. (2018) train a stacked BiLSTM language model, and these token embeddings have proved effective in many tasks. More such models will be discussed below. For syntactic parsing, early connectionist approaches (Jain, 1991; Miikkulainen, 1993; Ho and Chan, 1999; Costa et al., 2001) had limited success. The first neural network models to achieve empirical success used a recurrent neural network to model the derivation structure of a traditional syntactic constituency parser (Henderson, 2003, 2004). The recurrent neural network learns to model the sequence of parser actions, estimating the probability of the next parser action given the history of previous parser actions. This allows the decoding algorithm from the traditional parsing model to be used to efficiently search the space of possible parses. These models have also been applied to syntactic dependency parsing (Titov and Henderson, 2007b; Yazdani and Henderson, 2015) and joint syntactic-semantic dependency parsing (Henderson et al., 2013). Crucially, these neural networks do not model the sequence of parser decisions as a flat sequence, but instead model the derivation structure it specifies. A derivation structure includes relationships for the inter-dependencies between nodes in the parse tree. The pattern of interconnections between hidden layers of the recurrent neural network (henceforth referred to as the model structure) is designed to follow locality in this derivation structure, thereby giving the neural network a linguistically appropriate inductive bias. More recently, Dyer et al. (2015) provide a more direct relationship between the derivation structure and the model structure with their StackLSTM parsing model. In all these models, the use of recurrent neural networks allows arbitrarily large parse structures to be modelled without making any hard independence assumptions, in contrast to non-neural statistical models. Feed-forward neural networks have also been applied to modelling the derivation structure (Chen and Manning, 2014), but the accuracy is worse than using recurrent models (see Table 1), presumably because such models suffer from the need to make hard independence assumptions. 
Representing the parse tree as a derivation sequence, rather than a derivation structure, makes it possible to define syntactic parsing as a sequenceto-sequence problem, mapping the sentence to its parse sequence. If a neural network architecture for modelling sequences (called seq2seq models) can perform well at this task, then maybe the structured linguistic representations of natural language are not necessary (contrary to Fodor and Pylyshyn (1988)), not even to predict those structures. Vinyals et al. (2015) report very poor results for seq2seq models when trained on the standard dataset, but good results when trained on very large automatically-parsed corpora (see Table 1 semisup). They only achieve good results with the limited standard dataset by adding attention, which we will argue below makes the model no longer a seq2seq model. This indicates that structured representations really do capture important generalisations about language.2 2See (Collobert and Weston, 2008; Collobert et al., 2011) for an earlier related line of work. 6298 In contrast to seq2seq models, there have also been neural network models of parsing which directly represent linguistic structure, rather than just derivation structure, giving them induced vector representations which map one-to-one with the entities in the linguistic representation. Typically, a recursive neural network is used to compute embeddings of syntactic constituents bottom-up. Dyer et al. (2015) showed improvements by adding these representations to a model of the derivation structure. Socher et al. (2013a) only modelled the linguistic structure, making it difficult to do decoding efficiently. But the resulting induced constituent embeddings have a clear linguistic interpretation, making it easier to use them within other tasks, such as sentiment analysis (Socher et al., 2013b). Similarly, models based on Graph Convolutional Networks have induced embeddings with clear linguistic interpretations within pre-defined model structures (e.g. (Marcheggiani and Titov, 2017; Marcheggiani et al., 2018)). All these results demonstrate the incredible effectiveness of inducing vector-space representations with neural networks, relieving us from the need to do feature engineering. But neural networks do not relieve us of the need to understand the nature of language when designing our models. Instead of feature engineering, these results show that the best accuracy is achieved by engineering the inductive bias of deep learning models through their model structure. By designing a hand-coded model structure which reflects the linguistic structure, locality in the model structure can reflect locality in the linguistic structure. The neural network then induces features of the entities in this model structure. 4 Inducing Relations between Entities With the introduction of attention-based models, the model structure can now be learned. By choosing the nodes to be linguistically-motivated entities, learning the model structure in effect learns the statistical inter-dependencies between entities, which is what we have been referring to as relations. 4.1 Attention-Based Models and Variable Binding The first proposal of an attention-based neural model learned a soft alignment between the target and source words in neural machine translation (NMT) (Bahdanau et al., 2015). 
The model structure of the source sentence encoder and the model structure of the target sentence decoder are both flat sequences, but when each target word is generated, it computes attention weights over all source words. These attention weights directly express how target words are correlated with source words, and in this sense can be seen as a soft version of the alignment structure. In traditional statistical machine translation, this alignment structure is determined with a separate alignment algorithm, and then frozen while training the model. In contrast, the attentionbased NMT model learns the alignment structure jointly with learning the encoder and decoder, inside the deep learning architecture (Bahdanau et al., 2015). This attention-based approach to NMT was also applied to mapping a sentence to its syntactic parse (Vinyals et al., 2015). The attention function learns the structure of the relationship between the sentence and its syntactic derivation sequence, but does not have any representation of the structure of the syntactic derivation itself. Empirical results are much better than their seq2seq model (Vinyals et al., 2015), but not as good as models which explicitly model both structures (see Table 1). The change from the sequential LSTM decoders of previous NMT models to LSTM decoders with attention seems like a simple addition, but it fundamentally changes the kinds of generalisations which the model is able to learn. At each step in decoding, the state of a sequential LSTM model is a single vector, whereas adding attention means that the state needs to include the unboundedly large set of vectors being attended to. This use of an unbounded state is more similar to the above models with predefined model structure, where an unboundedly large stack is needed to specify the parser state. This change in representation leads to a profound change in the generalisations which can be learned. Parameterised rules which are learned when paying attention to one of these vectors (in the set or in the stack) automatically generalise to the other vectors. In other words, attention-based models have variable binding, which sequential LSTMs do not. Each vector represents the features for one entity, multiple entities can be kept in memory at the same time, and rules generalise across these entities. In this sense it is wrong to refer to attention-based models as sequence models; they are in fact induced-structure models. We will expand on this perspective in the rest of this section. 6299 4.2 Transformer and Systematicity The generality of attention as a structure-induction method soon became apparent, culminated in the development of the Transformer architecture (Vaswani et al., 2017). Transformer has multiple stacked layers of self-attention (attention to the other words in the same sequence), interleaved with nonlinear functions applied to individual vectors. Each attention layer has multiple attention heads, allowing each head to learn a different type of relation. A Transformer-encoder has one column of stacked vectors for each position in the input sequence, and the model parameters are shared across positions. A Transformer-decoder adds attention over an encoded text, and predicts words one at a time after encoding the prefix of previously generated words. Although it was developed for encoding and generating sequences, in Transformer the sequential structure is not hard-coded into the model structure, unlike previous models of deep learning for sequences (e.g. 
LSTMs (Hochreiter and Schmidhuber, 1997) and CNNs (LeCun and Bengio, 1995)). Instead, the sequential structure is input in the form of position embeddings. In our formulation, position embeddings are just properties of individual entities (typically words or subwords). As such, these inputs facilitate learning about absolute positions. But they are also designed to allow the model to easily calculate relative position between entities. This allows the model’s attention functions to learn to discover the relative position structure of the underlying sequence. In fact, explicitly inputting relative position relations as embeddings into the attention functions works even better (Shaw et al., 2018) (discussed further below). Whether input as properties or as relations, these inputs are just features, not hard-coded model structure. The attention weight functions can then learn to use these features to induce their own structure. The appropriateness and generality for natural language of the Transformer architecture became even more apparent with the development of pretrained Transformer models like BERT (Devlin et al., 2019). BERT models are large Transformer models trained mostly on a masked language model objective, as well as a next-sentence prediction objective. After training on a very large amount of unlabelled text, the resulting pretrained model can be fine tuned for various tasks, with very impressive improvements in accuracy across a wide variety of tasks. The success of BERT has led to various analyses of what it has learned, including the structural relations learned by the attention functions. Although there is no exact mapping from these structures to the structures posited by linguistics, there are clear indications that the attention functions are learning to extract linguistic relations (Voita et al., 2019; Tenney et al., 2019; Reif et al., 2019). With variable binding for the properties of entities and attention functions for relations between entities, Transformer can represent the kinds of structured representations argued for above. With parameters shared across entities and sensitive to these properties and relations, learned rules are parameterised in terms of these structures. Thus Transformer is a deep learning architecture with the kind of generalisation ability required to exhibit systematicity, as in (Fodor and Pylyshyn, 1988). Interestingly, the relations are not stored explicitly. Instead they are extracted from pairs of vectors by the attention functions, as with the use of position embeddings to compute relative position relations. For the model to induce its own structure, lower levels must learn to embed its relations in pairs of token embeddings, which higher levels of attention then extract. That Transformer learns to embed relations in pairs of token embeddings is apparent from recent work on dependency parsing (Kondratyuk and Straka, 2019; Mohammadshahi and Henderson, 2019, 2020). Earlier models of dependency parsing successfully use BiLSTMs to embed syntactic dependencies in pairs of token embeddings (e.g. (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2016)), which are then extracted to predict the dependency tree. Mohammadshahi and Henderson (2019, 2020) use their proposed Graphto-Graph Transformer to encode dependencies in pairs of token embeddings, for transition-based and graph-based dependency parsing respectively. 
Graph-to-Graph Transformer also inputs previously predicted dependency relations into its attention functions (like relative position encoding (Shaw et al., 2018)). These parsers achieve state of the art accuracies, indicating that Transformer finds it easy to input and predict syntactic dependency relations via pairs of token embeddings. Interestingly, initialising the model with pretrained BERT results in large improvements, indicating that BERT representations also encode syntactically-relevant relations in pairs of token embeddings.

4.3 Nonparametric Representations

As we have seen, the problem with vector-space models is not simply about representations, but about the way learned rules generalise. In work on grammar formalisms, generalisation is analysed by looking at the unbounded case, since any bounded case can simply be memorised. But the use of continuous representations does not fit well with the theory of grammar formalisms, which assumes a bounded vocabulary of atomic categories. Instead we propose an analysis of the generalisation abilities of Transformer in terms of theory from machine learning, Bayesian nonparametric learning (Jordan, 2010). We argue that the representations of Transformer are the minimal nonparametric extension of a vector space.

To connect Transformer to Bayesian probabilities, we assume that a Transformer representation can be thought of as the parameters of a probability distribution. This is natural, since a model's state represents a belief about the input, and in Bayesian approaches beliefs are probability distributions. From this perspective, computing a representation is inferring the parameters of a probability distribution from the observed input. This is analogous to Bayesian learning, where we infer the parameters of a distribution over models from observed training data. In this section, we outline how theory from Bayesian learning helps us understand how the representations of Transformer lead to better generalisation.

We do not make any specific assumptions about what probability distributions are specified by a Transformer representation, but it is useful to keep in mind an example. One possibility is a mixture model, where each vector specifies the parameters of a multi-dimensional distribution, and the total distribution is the weighted sum across the vectors of these distributions. For example, we can interpret the vectors x = x_1, ..., x_n in a Transformer's representation as specifying a belief about the queries q that will be received from a downstream attention function, as in:

P(q \mid x) = \sum_i P(i \mid x)\, P(q \mid x_i)
P(i \mid x) = \exp\!\big(\tfrac{1}{2}\|x_i\|^2\big) \Big/ \sum_{i'} \exp\!\big(\tfrac{1}{2}\|x_{i'}\|^2\big)
P(q \mid x_i) = \mathcal{N}(q\,;\,\mu{=}x_i,\ \sigma{=}1)

With this interpretation of x, we can use the fact that P(i \mid x, q) \propto P(i \mid x)\,P(q \mid x_i) \propto \exp(q \cdot x_i) (ignoring factors independent of i) to reinterpret a standard attention function.

Since Transformer has a discrete segmentation of its representation into positions (which we call entities), but no explicit representation of structure, we can think of this representation as a bag of vectors (BoV, i.e. a set of instances of vectors). Each layer has a BoV representation, which is aligned with the BoV representation below it. The final output only becomes a sequence if the downstream task imposes explicit sequential structure on it, which attention alone does not. These bag of vector representations have two very interesting properties for natural language.
First, the number of vectors in the bag can grow arbitrarily large, which captures the unbounded nature of language. Secondly, the vectors in the bag are exchangeable, in the sense of Jordan (2010). In other words, renumbering the indices used to refer to the different vectors will not change the interpretation of the representation.3 This is because the learned parameters in Transformer are shared across all positions. These two properties are clearly related; exchangeability allows learning to generalise to unbounded representations, since there is no need to learn about indices which are not in the training data.

3 These indices should not be confused with position embeddings. In fact, position embeddings are needed precisely because the indices are meaningless to the model.

These properties mean that BoV representations are nonparametric representations. In other words, the specification of a BoV representation cannot be done just by choosing values for a fixed set of parameters. The number of parameters you need grows with the size of the bag. This is crucial for language because the amount of information conveyed by a text grows with the length of the text, so we need nonparametric representations.

To illustrate the usefulness of this view of BoVs as nonparametric representations, we propose to use methods from Bayesian learning to define a prior distribution over BoVs where the size of the bag is not known. Such a prior would be needed for learning the number of entities in a Transformer representation, discussed below, using variational Bayesian approaches. For this example, we will use the above interpretation of a BoV x = \{x_i \mid 1 \le i \le k\} as specifying a distribution over queries, P(q \mid x) = \sum_i P(i \mid x)\,P(q \mid x_i). A prior distribution over these P(q \mid x) distributions can be specified, for example, with a Dirichlet Process, DP(\alpha, G_0). The concentration parameter \alpha controls the generation of a sequence of probabilities \rho_1, \rho_2, \ldots, which correspond to the P(i \mid x) distribution (parameterised by the \|x_i\|). The base distribution G_0 controls the generation of the P(q \mid x_i) distributions (parameterised by the x_i).

The use of exchangeability to support generalisation to unbounded representations implies a third interesting property, discrete segmentation into entities. In other words, the information in a BoV is spread across an integer number of vectors. A vector cannot be half included in a BoV; it is either included or not. In changing from a vector space to a bag-of-vector space, the only change is this discrete segmentation into entities. In particular, no discrete representation of structure is added to the representation. Thus, the BoV representation of Transformer is the minimal nonparametric extension of a vector space. With this minimal nonparametric extension, Transformer is able to explicitly represent entities and their properties, and implicitly represent a structure of relations between these entities. The continuing astounding success of Transformer in natural language understanding tasks suggests that this is an adequate deep learning architecture for the kinds of structured representations needed to account for the nature of language.
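The reinterpretation of attention given above can be checked numerically: under the Gaussian mixture example, the posterior P(i|x, q) coincides with standard softmax attention weights, because the 1/2 ||x_i||^2 terms in the prior and the Gaussian likelihood cancel, leaving exp(q · x_i). The following sketch (ours, purely illustrative) verifies this.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))   # bag of 5 vectors (entities), dimension 8
q = rng.normal(size=8)        # a query from a downstream attention function

# Standard attention weights: softmax over dot products q . x_i.
def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

attn = softmax(x @ q)

# Mixture-model posterior P(i|x,q) proportional to P(i|x) P(q|x_i), with
# P(i|x) proportional to exp(0.5 ||x_i||^2) and P(q|x_i) = N(q; mu=x_i, sigma=1).
prior = np.exp(0.5 * (x ** 2).sum(axis=1))
likelihood = np.exp(-0.5 * ((q - x) ** 2).sum(axis=1))
posterior = prior * likelihood
posterior /= posterior.sum()

print(np.allclose(attn, posterior))  # True: only exp(q . x_i) survives normalisation
```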
5 Looking Forward: Inducing Levels and their Entities As argued above, the great success of neural networks in NLP has not been because they are radically different from pre-neural computational theories of language, but because they have succeeded in replacing hand-coded components of those models with learned components which are specifically designed to capture the same generalisations. We predict that there is at least one more hand-coded aspect of these models which can be learned from data, but question whether they all can be. Transformer can learn representations of entities and their relations, but current work (to the best of our knowledge) all assumes that the set of entities is a predefined function of the text. Given a sentence, a Transformer does not learn how many vectors it should use to represent it. The number of positions in the input sequence is given, and the number of token embeddings is the same as the number of input positions. When a Transformer decoder generates a sentence, the number of positions is chosen by the model, but it is simply trying to guess the number of positions that would have been given if this was a training example. These Transformer models never try to induce the number of token embeddings they use in an unsupervised way.4 Given that current models hard-code different token definitions for different tasks (e.g. character embeddings versus word embeddings versus sentence embeddings), it is natural to ask whether a specification of the set of entities at a given level of representation can be learned. There are models which induce the set of entities in an input text, but these are (to the best of our knowledge) not learned jointly with a downstream deep learning model. Common examples include BPE (Sennrich et al., 2016) and unigram language model (Kudo, 2018), which use statistics of character n-grams to decide how to split words into subwords. The resulting subwords then become the entities for a deep learning model, such as Transformer (e.g. BERT), but they do not explicitly optimise the performance of this downstream model. In a more linguisticallyinformed approach to the same problem, statistical models have been proposed for morphology induction (e.g. (Elsner et al., 2013)). Also, Semi-Markov CRF models (Sarawagi and Cohen, 2005) can learn segmentations of an input string, which have been used in the output layers of neural models (e.g. (Kong et al., 2015)). The success of these models in finding useful segmentations of characters into subwords suggests that learning the set of entities can be integrated into a deep learning model. But this task is complicated by the inherently discrete nature of the segmentation into entities. It remains to find effective neural architectures for learning the set of entities jointly with the rest of the neural model, and for generalising such methods from the level of character strings to higher levels of representation. The other remaining hand-coded component of computational linguistic models is levels of representation. Neural network models of language typically only represent a few levels, such as the character sequence plus the word sequence, the word sequence plus the syntax tree, or the word sequence plus the syntax tree plus the predicate-argument structure (Henderson et al., 2013; Swayamdipta 4Recent work on inducing sparsity in attention weights (Correia et al., 2019) effectively learns to reduce the number of entities used by individual attention heads, but not by the model as a whole. 
6302 et al., 2016). And these levels and their entities are defined before training starts, either in preprocessing or in annotated data. If we had methods for inducing the set of entities at a given level (discussed above), then we could begin to ask whether we can induce the levels themselves. One common approach to inducing levels of representation in neural models is to deny it is a problem. Seq2seq and end2end models typically take this approach. These models only include representations at a lower level, both for input and output, and try to achieve equivalent performance to models which postulate some higher level of representation (e.g. (Collobert and Weston, 2008; Collobert et al., 2011; Sutskever et al., 2014; Vinyals et al., 2015)). The most successful example of this approach has been neural machine translation. The ability of neural networks to learn such models is impressive, but the challenge of general natural language understanding is much greater than machine translation. Nonetheless, models which do not explicitly model levels of representation can show that they have learned about different levels implicitly (Peters et al., 2018; Tenney et al., 2019). We think that it is far more likely that we will be able to design neural architectures which induce multiple levels of representation than it is that we can ignore this problem entirely. However, it is not at all clear that even this will be possible. Unlike the components previously learned, no linguistic theory postulates different levels of representation for different languages. Generally speaking, there is a consensus that the levels minimally include phonology, morphology, syntactic structure, predicate-argument structure, and discourse structure. This language-universal nature of levels of representation suggests that in humans the levels of linguistic representation are innate. This draws into question whether levels of representation can be learned at all. Perhaps they are innate because human brains are not able to learn them from data. If so, perhaps it is the same for neural networks, and so attempts to induce levels of representation are doomed to failure. Or perhaps we can find new neural network architectures which are even more powerful than what is now thought possible. It wouldn’t be the first time! 6 Conclusions We conclude that the nature of language has influenced the design of deep learning architectures in fundamental ways. Vector space representations (as in MLPs) are not adequate, nor are vector spaces which evolve over time (as in LSTMs). Attentionbased models are fundamentally different because they use bag-of-vector representations. BoV representations are nonparametric representations, in that the number of vectors in the bag can grow arbitrarily large, and these vectors are exchangeable. With BoV representations, attention-based neural network models like Transformer can model the kinds of unbounded structured representations that computational linguists have found to be necessary to capture the generalisations in natural language. And deep learning allows many aspects of these structured representations to be learned from data. However, successful deep learning architectures for natural language currently still have many handcoded aspects. The levels of representation are hand-coded, based on linguistic theory or available resources. Often deep learning models only address one level at a time, whereas a full model would involve levels ranging from the perceptual input to logical reasoning. 
Even within a given level, the set of entities is a pre-defined function of the text. This analysis suggests that an important next step in deep learning architectures for natural language understanding will be the induction of entities. It is not clear what advances in deep learning methods will be necessary to improve over our current fixed entity definitions, nor whether the resulting entities will be any different from the ones postulated by linguistic theory. If we can induce the entities at a given level, a more challenging task will be the induction of the levels themselves. The presumably-innate nature of linguistic levels suggests that this might not even be possible. But of one thing we can be certain: the immense success of adapting deep learning architectures to fit with our computational-linguistic understanding of the nature of language will doubtless continue, with greater insights for both natural language processing and machine learning. Acknowledgements We would like to thank Paola Merlo, Suzanne Stevenson, Ivan Titov, members of the Idiap NLU group, and the anonymous reviewers for their comments and suggestions. 6303 References Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2442–2452, Berlin, Germany. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR. Yoshua Bengio, R´ejean Ducharme, and Pascal Vincent. 2001. A neural probabilistic language model. In Advances in Neural Information Processing Systems 13, pages 932–938. MIT Press. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. J. Machine Learning Research, 3:1137–1155. Curt Burgess. 1998. From simple associations to the building blocks of language: Modeling meaning in memory with the HAL model. Behavior Research Methods, Instruments, & Computers, 30(2):188– 198. Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Proc. 14th National Conference on Artificial Intelligence, Providence, RI. AAAI Press/MIT Press. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740–750, Doha, Qatar. Association for Computational Linguistics. Noam Chomsky. 1959. On certain formal properties of grammars. Information and Control, 2:137–167. Michael Collins. 1997. Three generative, lexicalized models for statistical parsing. In Proc. 35th Meeting of Association for Computational Linguistics and 8th Conf. of European Chapter of Association for Computational Linguistics, pages 16–23, Somerset, New Jersey. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: deep neural networks with multitask learning. 
In Proceedings of the Twenty-Fifth International Conference (ICML 2008), pages 160–167, Helsinki, Finland. Gonc¸alo M. Correia, Vlad Niculae, and Andr´e F. T. Martins. 2019. Adaptively sparse transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2174– 2184, Hong Kong, China. Association for Computational Linguistics. Fabrizio Costa, Vincenzo Lombardo, Paolo Frasconi, and Giovanni Soda. 2001. Wide coverage incremental parsing by learning attachment preferences. pages 297–307. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Timothy Dozat and Christopher D. Manning. 2016. Deep biaffine attention for neural dependency parsing. CoRR, abs/1611.01734. ICLR 2017. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334–343, Beijing, China. Association for Computational Linguistics. Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179–212. Jeffrey L. Elman. 1991. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7:195–225. Micha Elsner, Sharon Goldwater, Naomi Feldman, and Frank Wood. 2013. A joint learning model of word segmentation, lexical acquisition, and phonetic variability. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 42–54, Seattle, Washington, USA. Association for Computational Linguistics. Katrin Erk. 2010. What is word meaning, really? (and how can distributional models help us describe it?). 6304 In Proceedings of the 2010 Workshop on GEometrical Models of Natural Language Semantics, pages 17–26, Uppsala, Sweden. Association for Computational Linguistics. Jerry A. Fodor and B. McLaughlin. 1990. Connectionism and the problem of systematicity: Why smolensky’s solution doesn’t work. Cognition, 35:183– 204. Jerry A. Fodor and Zenon W. Pylyshyn. 1988. Connectionism and cognitive architecture: A critical analysis. Cognition, 28:3–71. James Henderson. 1994. Description Based Parsing in a Connectionist Network. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA. Technical Report MS-CIS-94-46. James Henderson. 1996. A connectionist architecture with inherent systematicity. In Proceedings of the Eighteenth Conference of the Cognitive Science Society, pages 574–579, La Jolla, CA. James Henderson. 2000. Constituency, context, and connectionism in syntactic parsing. In Matthew Crocker, Martin Pickering, and Charles Clifton, editors, Architectures and Mechanisms for Language Processing, pages 189–209. Cambridge University Press, Cambridge UK. James Henderson. 
2003. Inducing history representations for broad coverage statistical parsing. In Proc. joint meeting of North American Chapter of the Association for Computational Linguistics and the Human Language Technology Conf., pages 103–110, Edmonton, Canada. James Henderson. 2004. Discriminative training of a neural network statistical parser. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 95–102, Barcelona, Spain. James Henderson, Paola Merlo, Ivan Titov, and Gabriele Musillo. 2013. Multilingual joint parsing of syntactic and semantic dependencies with a latent variable model. Computational Linguistics, 39(4):949–998. E.K.S. Ho and L.W. Chan. 1999. How to design a connectionist holistic parser. Neural Computation, 11(8):1995–2016. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. K. Hornik, M. Stinchcombe, and H. White. 1989. Multilayer feedforward networks are universal approximators. Neural Networks, 2:359–366. Ajay N. Jain. 1991. PARSEC: A Connectionist Learning Architecture for Parsing Spoken Language. Ph.D. thesis, Carnegie Mellon University, Pittsburgh, PA. M.I. Jordan. 2010. Bayesian nonparametric learning: Expressive priors for intelligent systems. In R. Dechter, H. Geffner, and J. Halpern, editors, Heuristics, Probability and Causality: A Tribute to Judea Pearl, chapter 10. College Publications. Aravind K. Joshi. 1987. An introduction to tree adjoining grammars. In Alexis Manaster-Ramer, editor, Mathematics of Language. John Benjamins, Amsterdam. Aravind K. Joshi, K. Vijay-Shanker, and David Weir. 1990. The convergence of mildly context-sensitive grammatical formalisms. In Peter Sells, Stuart Shieber, and Tom Wasow, editors, Foundational Issues in Natural Language Processing. MIT Press, Cambridge MA. Forthcoming. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313–327. Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing universal dependencies universally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 2779–2795, Hong Kong, China. Association for Computational Linguistics. Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2015. Segmental recurrent neural networks. Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66–75, Melbourne, Australia. Association for Computational Linguistics. Yann LeCun and Yoshua Bengio. 1995. Convolutional networks for images, speech, and time-series. In Michael A. Arbib, editor, The handbook of brain theory and neural networks (Second ed.), page 276–278. MIT press. Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225. C. von der Malsburg. 1981. The correlation theory of brain function. Technical Report 81-2, Max-PlanckInstitute for Biophysical Chemistry, Gottingen. Diego Marcheggiani, Joost Bastings, and Ivan Titov. 2018. Exploiting semantics in neural machine translation with graph convolutional networks. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 486–492, New Orleans, 6305 Louisiana. Association for Computational Linguistics. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506–1515, Copenhagen, Denmark. Association for Computational Linguistics. Risto Miikkulainen. 1993. Subsymbolic Natural Language Processing: An integrated model of scripts, lexicon, and memory. MIT Press, Cambridge, MA. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. Curran Associates, Inc. Alireza Mohammadshahi and James Henderson. 2019. Graph-to-graph transformer for transition-based dependency parsing. Alireza Mohammadshahi and James Henderson. 2020. Recursive non-autoregressive graph-to-graph transformer for dependency parsing with iterative refinement. Sebastian Pad´o and Mirella Lapata. 2007. Dependencybased construction of semantic space models. Computational Linguistics, 33(2):161–199. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Carl Pollard and Ivan A. Sag. 1987. Information-Based Syntax and Semantics. Vol 1: Fundamentals. Center for the Study of Language and Information, Stanford, CA. Adwait Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Machine Learning, 34:151–175. Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B Viegas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of bert. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8594–8603. Curran Associates, Inc. D. E. Rumelhart, G. E. Hinton, and R. J. Williams. 1986a. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, Vol 1, pages 318–362. MIT Press, Cambridge, MA. D. E. Rumelhart, J. L. McClelland, and the PDP Reseach group. 1986b. Parallel Distributed Processing: Explorations in the microstructure of cognition, Vol 1. MIT Press, Cambridge, MA. Sunita Sarawagi and William W Cohen. 2005. Semimarkov conditional random fields for information extraction. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 1185–1192. MIT Press. Hinrich Sch¨utze. 1993. Word space. In Advances in Neural Information Processing Systems 5, pages 895–902. 
Morgan Kaufmann. Mark S. Seidenberg. 2007. Connectionist models of reading. In Gareth Gaskell, editor, Oxford Handbook of Psycholinguistics, chapter 14, pages 235– 250. Oxford University Press. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725, Berlin, Germany. Association for Computational Linguistics. Lokendra Shastri and Venkat Ajjanagadde. 1993. From simple associations to systematic reasoning: A connectionist representation of rules, variables, and dynamic bindings using temporal synchrony. Behavioral and Brain Sciences, 16:417–451. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468, New Orleans, Louisiana. Association for Computational Linguistics. Paul Smolensky. 1988. On the proper treatment of connectionism. Behavioral and Brain Sciences, 11:1– 17. Paul Smolensky. 1990. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence, 46(12):159–216. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013a. Parsing with compositional vector grammars. In Proceedings of the 6306 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 455–465, Sofia, Bulgaria. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Swabha Swayamdipta, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016. Greedy, joint syntacticsemantic parsing with stack LSTMs. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 187–197, Berlin, Germany. Association for Computational Linguistics. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593– 4601, Florence, Italy. Association for Computational Linguistics. Ivan Titov and James Henderson. 2007a. A latent variable model for generative dependency parsing. In Proceedings of the Tenth International Conference on Parsing Technologies, pages 144–155, Prague, Czech Republic. Association for Computational Linguistics. Ivan Titov and James Henderson. 2007b. A latent variable model for generative dependency parsing. In Proceedings of the International Conference on Parsing Technologies (IWPT’07), Prague, Czech Republic. Association for Computational Linguistics. Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. 
Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384–394, Uppsala, Sweden. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc. Oriol Vinyals, Ł ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2773–2781. Curran Associates, Inc. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797–5808, Florence, Italy. Association for Computational Linguistics. Majid Yazdani and James Henderson. 2015. Incremental recurrent neural network dependency parser with search-based discriminative training. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 142–152, Beijing, China. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6307–6321 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6307 To Boldly Query What No One Has Annotated Before? The Frontiers of Corpus Querying Markus G¨artner, Kerstin Jung Institute for Natural Language Processing University of Stuttgart {markus.gaertner,kerstin.jung}@ims.uni-stuttgart.de Abstract Corpus query systems exist to address the multifarious information needs of any person interested in the content of annotated corpora. In this role they play an important part in making those resources usable for a wider audience. Over the past decades, several such query systems and languages have emerged, varying greatly in their expressiveness and technical details. This paper offers a broad overview of the history of corpora and corpus query tools. It focusses strongly on the query side and hints at exciting directions for future development. 1 Introduction Annotated corpora have always been the backbone for many fields in NLP and other disciplines related to linguistics. Whether serving as an invaluable source of empirical evidence for foundational research or doubling as gold-standard training input for fueling the furnaces of our machine learning factories, their importance cannot be overemphasized. But especially for the empirically motivated user base, corpora are only ever as good as the means available to explore them. And the primary means of exploring linguistically annotated corpora have always been (dedicated) corpus query tools and corpus query languages in their manifold shapes. In this paper we intend to give a thorough chronology of the major interplay between corpus progression and query tool evolution, with a strong focus on the latter. We start with an overview on relevant aspects of corpora and how they changed over the past ~30 years in Section 2. Section 3 elaborates on the observable phases in query tool development. In Section 4 we discuss alternative corpus query approaches based on general purpose data(base) management solutions and provide pointers to related work in Section 5. Section 6 summarizes some of our observations and with Section 7 we finally hint at our vision for future directions in corpus query system development. 2 Once Upon a Corpus – Trends in Corpus Evolution Though corpus linguistics dates back further, major online catalogs such as those from LDC1 and ELRA2 list corpora starting from the early 1990s. In the following decades corpus trends have varied along several dimensions, both technical and content-related. This section discusses such features and gives examples for their evolution. Since this overview is an introduction to digital corpus query systems, we mainly focus on written and annotated corpora. With a focus on written corpora, character encoding is a decisive factor when estimating the publication date. Starting from plain ASCII (Everts, 20003, Graff and Cieri, 2003) and language/script specific encodings, such as ISO/IEC 8859 (Armstrong-Warwick et al., 1994; Federico et al., 2000), nowadays many corpora come with a (mostly) language independent UTF-8 encoding (Ion et al. (2012); Prasad et al. (2019) and compare Sch¨afer (2015) with Sch¨afer and Bildhauer (2012)), which is also able to capture symbols relevant for transcription and annotation. Similar to character encoding, the preferences regarding the representation format for corpus content changed over time. 
Many corpora established in the 1990s come in an SGML format (Liberman, 1989; Amaryllis, 2001; Graff, 1995). In the next decade, XML-based corpora followed (Chiao et al. (2006) and compare Hajiˇc et al. (2001) and Pajas 1Linguistic Data Consortium, https://catalog. ldc.upenn.edu/ 2European Language Resources Association, http:// catalogue.elra.info/ 3Earlier version published 1997 by ELRA: ISLRN 628817-117-400-1 6308 and ˇStˇep´anek (2005)) and since corpora were also made accessible over the web, relational database management systems (RDBMSs) became a valuable backend for corpus storage (Davies, 2005). Today we face a multitude of formats ranging from sophisticated and specialized XML encodings to simple tabular formats and often a corpus comes with more than one representation (Petran et al., 2016; Bick, 2018). Especially since the first CoNLL shared tasks4, their tabular format to encode sequence-based annotations and relations has been majorly developed (Nivre et al., 2016). Regarding included languages, multilingual and (partly) parallel corpora appear early (Liberman, 1989; Armstrong-Warwick et al., 1994; Graff and Finch, 1994), however, there was a rise of parallel corpora in the first decade of the current century. Prominent examples are Europarl (Koehn, 2005), the CESTA Evaluation Package (Hamon et al., 2006) and the Prague Czech-English Dependency Treebank 1.0 (Cmejrek et al., 2005). On the other hand, with the rise of web corpora, language detection became more important to only crawl (or keep) web data for a specific language. Corpus size is a less discriminative factor than one might think, since many early corpora came as collections of sub-corpora. Armstrong-Warwick et al. (1994) already contains 90 million words and LDC’s Gigaword initiative started in 2003 (Graff and Cieri, 2003), while many small corpora for specific topics or containing manual annotations are constantly being created. Nevertheless, with recent web corpora, e.g. ENCOW165 and iWEB6, several billion tokens pose new challenges for the design of both storage and search facilities. While for spoken corpora domain selection is often tailored to the research question at hand (cf. Talkbank (MacWhinney et al., 2004)), for written corpora (and especially annotated ones) there is a bias towards news and official documents, which was superseded by multi-domain web corpora starting in the late 2000s (e.g. the WaCKy initiative (Baroni et al., 2009) and COW) and, in the follow up, the increasing number of corpora of computermediated communication and social media7. Like 4https://www.conll.org/previous-tasks 5COrpora from the Web (COW), English sub-corpus, https://corporafromtheweb.org/ 6https://www.english-corpora.org/ 7Annual conference on computer-mediated communication and social media corpora started in 2013 https:// sites.google.com/site/cmccorpora/ with the language setting, for web-corpora the challenge is no longer to include more languages or domains, but to identify and/or restrict them to a sensible subset. Collections of historical language data have also been available for some time, e.g. the Corpus of Middle English Prose and Verse8 and with the rise of the Digital Humanities many further corpora are created and/or enhanced with linguistic annotations, such as the Drama Corpora Project9, where some corpora have been enhanced with lemma information. 
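As a concrete illustration of the simple tabular formats mentioned above, the sketch below parses one sentence in the ten-column CoNLL-U layout associated with Nivre et al. (2016); the miniature example sentence and the subset of fields we keep are our own illustrative choices, not taken from any particular corpus.

```python
from typing import NamedTuple, List

class Token(NamedTuple):
    idx: str      # ID
    form: str     # surface form
    lemma: str
    upos: str     # universal part-of-speech tag
    head: str     # index of the syntactic head, "0" for the root
    deprel: str   # dependency relation to the head

def read_conllu_sentence(lines: List[str]) -> List[Token]:
    """Parse one sentence block in the 10-column CoNLL-U layout:
    ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC."""
    tokens = []
    for line in lines:
        if not line.strip() or line.startswith("#"):
            continue                      # skip comment lines and blank separators
        cols = line.rstrip("\n").split("\t")
        tokens.append(Token(cols[0], cols[1], cols[2], cols[3], cols[6], cols[7]))
    return tokens

example = [
    "# text = Dogs bark .",
    "1\tDogs\tdog\tNOUN\tNNS\tNumber=Plur\t2\tnsubj\t_\t_",
    "2\tbark\tbark\tVERB\tVBP\t_\t0\troot\t_\t_",
    "3\t.\t.\tPUNCT\t.\t_\t2\tpunct\t_\t_",
]
print(read_conllu_sentence(example))
```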
Most corpora come with annotations, the earlier ones mainly with flat and word-based annotations, mostly including part-of-speech, such as the ECIELSNET Italian & German tagged sub-corpus10. Regarding the structural aspect, stand-off syntactic annotations became more feasible with emerging treebanks, while over time the focus changed from phrase-based (Brants et al., 2004) to dependency tree structures (Hajiˇc et al., 2001). The current decade has also seen an increase in the richness of annotation layers of morphological, syntactical and semantical description, including highly concurrent annotations belonging to the same description layer, e.g. Ide et al. (2010) or Schweitzer et al. (2018). 3 A Brief History of Querying We observed three major phases or generations in the history of corpus query systems, which are roughly aligned to the last three decades. The following is meant as a comprehensive but not exhaustive chronology of corpus query systems and approaches. Space does not permit we provide indepth descriptions for every system mentioned but instead refer to Section 5 for pointers to existing work that discusses and compares certain (families of) query systems in detail. 3.1 First Generation – Humble Beginnings The history of corpus querying systems has been for the most part tightly connected to the gradual expansion of the targeted corpus resources. As such the initial wave of corpus query tools during the 1990s was mostly geared towards text corpora: The COSMAS11 lineage remains until today12 8https://quod.lib.umich.edu/c/cme/ 9https://dracor.org/ 10ISLRN 869-857-775-378-7 11Corpus Search, Management and Analysis System, http://www.ids-mannheim.de/cosmas2/ 12The initial version COSMAS I has been in continuous service from 1992 till 2003 and COSMAS II ever since 2002 6309 the public query front-end for the large corpus collection hosted at the IDS (Bodmer, 2005), offering keyword in context (KWIC) visualization in a browser-frontend and various query constraints. In contrast the Linguistic DataBase program (LDB) (Halteren and Heuvel, 1990) features a very expressive tree-based query syntax and also ships with a tree editor. In addition it provides an ingenious event-based approach for extracting information from a corpus during search. The Corpus Workbench (CWB) architecture (Christ, 1994) with the Corpus Query Processor (CQP) as its core component is maybe the most widely used corpus query system as of today, serving as the backend for many corpus exploration websites. Having been under continuous maintenance to keep up with the demands of the new century (Evert and Hardie, 2011), it provides a solid set of simple yet expressive search features, such as regular expressions over tokens and token content, flexible structural boundaries, support for parallel corpora or the ability to enrich a corpus during ingest with external data that can then be used for querying, e.g. WordNet (Miller, 1995) categories. Emu (Cassidy and Harrington, 1996) was designed for speech corpora with multiple levels of segmentation. Primarily a hierarchical speech data management system, it also supports label- and position-based queries for collections of tokens. Similarly the MATE Workbench (Mengel, 1999; Mengel et al., 1999; Heid and Mengel, 1999; Isard et al., 2000) also targets combinations of text and speech data in the form of XML annotation files. 
It provides full boolean operations over hierarchical and time-based constraints in a logic-style query language, but no direct support for quantifiers. 3.2 Second Generation – The Rush for Rapid Feature Expansion At the dawn of the 21st century the second and larger wave of query systems emerged. Initially focused heavily on treebanks annotated for phrasebased syntax, a later trend shifted more towards supporting dependency syntax annotations, with an overall theme of increasing expressiveness with new approaches to query syntax and constraints. TIGERSearch (K¨onig and Lezius, 2000; Lezius, 2002) was among the first with its logic-based query language to target phrase-based treebanks conforming to the TIGER model (Brants et al., 2004). It inspired many of the later query approaches, but was quickly surpassed wrt expressiveness due to limited negation or quantification13. The ICE Corpus Utility Program (ICECUP)14 introduced a completely new direction of development. Wallis and Nelson (2000) emphasized the complexity required to transform a twodimensional tree description into a linear sequence of textual expression and made an argument for a graphical query approach. Their fuzzy tree fragments act as visual (under-)specification of the targeted phrase-based tree structures and are then matched against instances in a corpus. The appeals of this approach are diverse: It enables examplebased searching by allowing the user to start from an existing instance in the corpus, transform it into a query and then relax the constraints on that query to generalize it15. Not having to learn a formal query language and annotation schemes first, also lowers the barrier to entry for successful querying. As a dedicated treebank query tool TGrep2 (Rohde, 2001) offers a rich query syntax for phrasebased treebanks. Notable features are conjunction, disjunction and negation for relations, over 30 predefined basic link types and the ability for users to simplify complex queries by using macros. Usually corpus query tools depend on the target data already being annotated. Gsearch (Corley et al., 2001) however lets the user query unstructured text data by parsing it on the fly with a chart parser. Gsearch queries contain phrase-based constraints with limited boolean operators and the results are emitted in SGML. VIQTORYA16 (Steiner and Kallmeyer, 2002) is another tool to query phrase-based treebanks. Its query syntax is very similar to TIGERSearch17 and queries are translated for the RDBMS backend. Outside the domain of monolingual corpora ParaConc (Barlow, 2002) combines typical concordancer functionality such as surface search and 13The developers decided to forgo universal quantification due to computational cost and tractability (TIGERSearch Help, section 10.3) but also proposed an extension of the language with universal quantification and the implication operator. Marek et al. (2008) mention a solution based on set operations over multiple queries. This “allows to express queries which need a universal quantifier if expressed in a single query”. Unfortunately the referenced term paper is not available online. 14Designed for ICE-GB, the British component of the International Corpus of English (Nelson et al., 2002). 15Described by Wallis and Nelson (2000) as the ’get me something like that’ query method. 16Visual Query Tool for Syntactically Annotated Corpora 17Consisting of the same quantifier-free subset of first-order logic, but different precedence definition of internal nodes (cf. 
Steiner and Kallmeyer (2002) and Clematide (2015)). 6310 KWIC result view with regex and tag search and applies it to parallel corpora as targets. The CorpusSearch (Taylor, 2003; Randall, 2008) command line tool for phrase-based syntax expects tree search configurations provided via query files with a boolean query language over a variety of tree predicates and regular expressions. Limitations on disjunction and negation and lack of quantification18 make it slightly less expressive. With full first-order logic the Finite Structure Query (FSQ) tool by Kepser (2003) offers access to the complete TIGER model, including arbitrary secondary edges and support for regular expressions in a graphical user interface (GUI). It is however limited to rather small corpora due to poor scalability of the query evaluation process.19 To access multi-modal and highly crossannotated data in the NITE Object Model Library (Carletta et al., 2003), Evert and Voormann (2002) specified the NITE Query Language (NiteQL) based on MATE. Information from various segmentation levels can be extracted and combined in a logic-style language, including limited quantification. To honor the nature of multi-modal data they also propose a level of “fuzziness” for time operators with a configurable fuzziness interval. Based on the MdF (Monads-dot-Features) Database and its query language QL by Doedens (1994), Emdros (Petersen, 2004) implements a text database for annotated texts. Its query syntax uses bracket nesting to express hierarchical relations and it surpasses TIGERSearch in several aspects of expressiveness, e.g. existential negation20. While previously mentioned query systems were either freely available or bound to the licensing model of associated corpus resources (e.g. ICECUP), the popular Sketch Engine (Kilgarriff et al., 2004) commercialized21 corpus management and exploration in a web-based platform (Kilgarriff et al., 2014). Extending the CQP, its own query language CQL offers efficient access to corpora available on the platform (Jakub´ıˇcek et al., 2010). Around the same time ANNIS was published 18The way negation on arguments to search-function calls is handled allows to express certain quantified relations though. 19The author of FSQ discusses those limitations in (Kepser, 2004) and proposes a solution based on monadic second-order logic which was later implemented in MonaSearch. 20See Petersen (2005) for a brief comparison of the two systems including benchmarks on example queries. 21An open-source part under the label NoSketch Engine with the Manatee backend for indexing and search is also available at https://nlp.fi.muni.cz/trac/noske. (Dipper and G¨otze, 2005) and started a successful ecosystem with the corpus metamodel SALT, the converter framework PEPPER and ANNIS itself as search module with its query language AQL. AQL is a very expressive query language on top of the graph-based model of SALT and an extension of the TIGERSearch syntax. Notable improvements over TIGERSearch are the access to concurrent annotations for the same layers, a rich set of segment relations to choose from and the generalization of directed relations in a query to be applicable for any type of edge in the corpus graph (e.g. syntax, coreference or alignments in parallel corpora). Queries in ANNIS can be constructed textually or graphically in a browser environment. 
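For readers unfamiliar with this family of languages, the snippet below gives two schematic queries in the TIGERSearch-derived textual style used by ANNIS. The attribute names (cat, pos, tok) and the edge label coref are assumptions that depend on how a given corpus names its annotation layers, so these examples are illustrative rather than copied from any system's documentation.

```python
# Schematic queries in the TIGERSearch/AQL style: attribute-value node
# descriptions joined by "&" and related via their numbers (#1, #2).
# All attribute and edge names below are corpus-specific assumptions.

# A noun phrase dominating a common-noun token:
np_over_noun = 'cat="NP" & pos="NN" & #1 > #2'

# Two tokens linked by a named pointing relation (e.g. coreference),
# illustrating the generalisation of relations beyond syntax trees:
coref_pair = 'tok & tok & #1 ->coref #2'
```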
It has been under continuous development for about 15 years now (Zeldes et al., 2009; Krause and Zeldes, 2014), resulting in the richest collection of result visualizations available in any corpus query system. The Linguist’s Search Engine (LSE) (Resnik and Elkiss, 2005) applies the query-by-example concept in a browser-based setting: A user provides a natural language example containing the desired phenomenon and receives a parse tree usable for querying. Relaxation or removal of constraints from this tree then yields increasingly generalized instances from built-in or custom collections22. The emergence of XPath23 as a way of querying the tree-structure of various XML-based corpora offered new directions for corpus query languages. Bird et al. (2006) introduced LPath as an extension of XPath to overcome its limitations regarding the lack of expressible horizontal relations, a feature crucial for querying linguistic data. A later extension turned it into a first-order complete variant named LPath+ (Lai and Bird, 2005). Faulstich et al. (2006) also used an extension of XPath called DDDQuery to query complex annotation graphs of historical texts24. While using a RDBMS as backend, they do not directly translate queries into SQL. Instead user queries are first transformed into a first-order logic intermediate representation which in turn is translated into SQL. The Prague Dependency Treebank (PDT) (Hajiˇc et al., 2001; Hajiˇc, 2006) is a richly annotated corpus. Its unique characteristic is a tectogram22The “Getting Started Guide” (http://hdl.handle. net/1903/1324) for LSE mentions TGrep2 as the search component. In Resnik and Elkiss (2005) this information is missing and the screenshots do not show textual TGrep queries anymore, so the actual query evaluation backend is unknown. 23https://www.w3.org/TR/xpath 24http://www.deutschdiachrondigital.de/ 6311 matical layer which also includes annotations for coreference, deep word order, topic and focus. To provide users with adequate tools for access to this complexity, NetGraph (Ondruˇska et al., 2002; M´ırovsk´y, 2006) allows creation of tree queries for various layers both textually and graphically.25 Stockholm TreeAligner (Lundborg et al., 2007; Marek et al., 2008) continues the trend of extending the TIGERSearch language and applies it to parallel corpora. Its main improvement is the (re)introduction and implementation of universal quantification to overcome this central weakness. Classic query tools for text corpora such as CQP lack the ability to efficiently deal26 with common features of annotations for morphologically rich languages, such as positional tagsets or non-disambiguated annotation instances. POLIQARP27 (Przepi´orkowski et al., 2004; Janus and Przepi´orkowski, 2007) is an indexer and query tool loosely based on the CQP approach with a client-server architecture and a variety of available client implementations. Initially targeted towards rich word-level annotations, such as in the IPI PAN Corpus (Przepi´orkowski, 2004), it was later extended to also cover syntactic-semantic treebanks. What’s wrong with my NLP? by (Riedel, 2008) is primarily meant as a visualization tool with the ability to highlight differences between two concurrent dependency annotations (e.g. a gold standard and automatic predictions) with search options based on surface forms, tags and as a neat feature also including aforementioned diffs. Maryns and Kepser (2009a) extended the expressiveness of FSQ to monadic second-order logic in MonaSearch. 
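Returning to the XPath-based line of work above, the fragment below shows what plain XPath already expresses over a hypothetical XML encoding in which constituents are elements and tokens carry a pos attribute; the element and attribute names are our own illustrative choices, and the extensions of Bird et al. (2006) and Faulstich et al. (2006) start from exactly this kind of expression.

```python
# Plain XPath over a hypothetical treebank encoding in which constituents
# are elements (<S>, <NP>, <VP>, ...) and tokens are <tok pos="..."> leaves.
from xml.etree import ElementTree as ET

sentence = ET.fromstring(
    "<S><NP><tok pos='DT'>the</tok><tok pos='NN'>dog</tok></NP>"
    "<VP><tok pos='VBZ'>barks</tok></VP></S>"
)

# All NP constituents that directly contain a token (ElementTree's XPath subset):
nps = sentence.findall(".//NP[tok]")
# The same information need in full XPath 1.0, restricted to common nouns:
full_xpath = "//S//NP[.//tok[@pos='NN']]"
print(full_xpath, [ET.tostring(np, encoding="unicode") for np in nps])
```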
It features a GUI for viewing textonly “flat” results and defining queries of enormous expressiveness. However, due to the limitations of the underlying MONA framework (requiring binary tree structures), the system can only target collections of proper trees. PML-TQ28 (Pajas and ˇStˇep´anek, 2009; ˇStˇep´anek and Pajas, 2010) is effectively the successor of NetGraph, being designed to handle 25Besides NetGraph the tree visualizer and editor software TrEd (Pajas, 2009) also can be used to search in PDT and other tree structures via user macros defined in Perl. It does however not offer a query language for non-programmers. 26This does not imply their expressiveness being insufficient for this task, but rather that such queries can become quite bloated and their construction cumbersome for users. 27POLyinterpretation Indexing Query And Retrieval Processor 28Prague Markup Language - Tree Query the rich multi-level annotations in the PDT. Its graphical client29 is directly integrated into the tree editor TrEd (Pajas, 2009) to support graphical query construction. Queries in PML-TQ are expressed as a mandatory selection part in bracket-syntax and an optional list of instructions to generate result reports. The latter of those two parts was groundbreaking in that it allows for an unprecedented freedom in selectively extracting information from any successful match during a search and creating various aggregations or statistics from it. Besides excellent result handling its query language is also quite powerful, including quantification and negation of sub-queries. 3.3 Third Generation – New Challenges During the last decade the speed at which new query tools have been developed or published slowed down considerably. At the same time continued growth in size of corpus resources rendered some of the earlier approaches inapplicable (cf. (Kepser, 2004) for a discussion on the limitations of FSQ), calling for innovative alternatives. The three most common themes of this era were (i) scalability and adaptability of search backends to keep up with the explosive growth of corpora, (ii) reducing the barrier to entry for a wide(r) range of potential users and (iii) working towards unification or standardization of query languages. GrETEL30 (Augustinus et al., 2012) is another implementation of the example-based search concept for the LASSY corpus (van Noord et al., 2013). Users provide sentences or example fragments and mark the areas of interest. Examples are then parsed, the subtrees for the specified part(s) of the input extracted and subsequently translated into XPath queries to run against the corpus in XML format. Further query options include the ability to specify whether or not pos, lemma or surface form of tokens in the subtree should be considered for the query. Since the user is effectively shielded from the tree representation and formal query formulation, GrETEL requires neither knowledge of an actual query language nor about the annotation scheme or underlying theories of the corpus. Fangorn (Ghodke and Bird, 2012) addresses the challenge of querying treebanks too large to be loaded into memory, a scenario prohibitive for 29The modular architecture supports multiple scenarios, such as a client-server setup with an RDBMS backend or an integrated index-less query evaluator in Perl for local data. 30Greedy Extraction of Trees for Empirical Linguistics 6312 query tools with custom evaluation engines. 
They use Apache LUCENE31 in a client-server setup to manage large numbers of phrase structure trees. Its query language follows the LPath scheme but lacks regular expressions support on label content. Unlike the majority of other systems in recent years, we developed ICARUS32 (G¨artner et al., 2013) as a standalone desktop application for visualization and example-based search33 with a custom query evaluation system and no indexing or dependency on another database technology. Initially designed for querying dependency treebanks it underwent multiple extensions to make it compatible with annotations for coreference (G¨artner et al., 2014) and prosody34 (G¨artner et al., 2015) and also to incorporate automatic error mining as a means of exploration (Thiele et al., 2014). Its bracket-style query language is similar to PML-TQ but lacks quantifiers and a dedicated section for result preparation instructions. While queries can be defined both textually or graphically, the preferred way is to use the graphical query editor that also provides contextual help for getting started easily. CLARIN Federated Content Search35 (CLARIN-FCS) is a successful example of unifying query access to multiple distributed corpus resources hosted by different parties and with diverse native query frontends. Its query language FCS-QL is heavily based on POLIQARP but also only meant to cover a small intersection of the expressiveness of common corpus query tools. On the level of standardization CQLF36 (Ba´nski et al., 2016) provides an initiative that aims at providing means for comparability and interoperability of corpus query languages. In its first phase37 CQLF-1 defines classes and features for the description of query languages for single-stream data. A unified serialization format for CQLF-1 is available with KoralQuery (Bingel and Diewald, 2015), a JSON-LD based and theory-neutral cor31https://lucene.apache.org/ 32Interactive Platform for Corpus Analysis and Research, University of Stuttgart 33An integrated interface for plugging in dependency parsers allows users to generate parses for example sentences that can then be converted into queries and relaxed iteratively. 34With various similarity measures usable for expressing query constraints based on the PaIntE model by M¨ohler (2001) 35https://www.clarin.eu/content/ content-search 36Corpus Query Lingua Franca. Part of ISO TC37 SC4 Working Group 6 (ISO 24623-1:2018). 37CQLF is an ongoing long-term effort, with CQLF-2 currently being worked on at the stage of a committee draft. pus query protocol. It serves as the internal query representation38 of KorAP39 (Ba´nski et al., 2014; Diewald et al., 2016), the designated successor of COSMAS II. While CLARIN-FCS multiplexes a query defined in a common (limited wrt expressiveness) query language to multiple query processors, KorAP lets the user choose up-front among several query languages40 that all can be processed by the system in a microservices architecture41. Similar to Fangorn, SETS42 (Luotolahti et al., 2015) is geared towards very large treebanks, this time targeting dependency syntax with a query language inspired by TRegex43. It is browser-based with a RDBMS backend and uses an elaborate query evaluation process: SETS generates and compiles optimized code for matching tokens for each query and only retrieves the minimal token sets from the database needed for evaluating a query. Multilingwis44 (Clematide et al., 2016) provides exploration in multiparallel corpora (Gra¨en et al., 2016). 
Focused on result presentation and reducing the required expert knowledge, it simplifies the process of finding translation variants. Other notable events in this time period include the modernization of CQP “for the new millennium” (Evert and Hardie, 2011) and the introduction of graphANNIS (Krause et al., 2016), a graph database backend for ANNIS3 as an alternative to the former RDBMS-based relANNIS. 4 Technological Alternatives Many of the systems we presented in Section 3 use various forms of database technology as their storage or evaluation backend. Typically every such database or information management system already ships with its dedicated query language, such as SQL for RDBMSs, SPARQL for the RDF format, XPath and XQuery for XML documents, CYPHER for Neo4j and other graph-based databases or Apache LUCENE with its own query dialect for accessing the text database. 38The high level of abstraction it implements and the verbosity required to express simple queries combined with JSON syntax results in limited human readability. 39Korpusanalyseplattform der n¨achsten Generation (“Corpus analysis platform of the next generation”) 40At the time of writing it supports the following query languages: Poliqarp, FCS-QL, AQL, CQP 1.2, COSMAS II 41KorAP builds on a variety of (storage) technologies, inluding several RDBMS variants, LUCENE and also the graph database Neo4j (http://neo4j.com/). 42Scalable and Efficient Tree Search 43A “Tree regular expression” language in TGrep2 style 44Multilingual Word Information System 6313 This does of course prompt the question on the necessity of developing dedicated corpus query languages when more often than not the actual query evaluation is just offloaded to an existing database technology. Already Jarke and Vassiliou (1985) mentioned a plethora of (technical) factors to be considered when deciding on a (database) query language. Mueller (2010) on the other hand takes the perspective of scholarly users, providing arguments especially targeting the aspects of usability from a humanistic point of view, describing the handling of search results as “Achilles heel of corpus query tools”. Having previously examined those factors in (G¨artner and Kuhn, 2018), we also agree on the continuing necessity of dedicated corpus query systems and query languages to bridge the gap between formal/technical expressiveness and the usability factors decisive for corpus users. Especially future directions as the ones we propose in Section 7 demand architectures that are more complex than the mere translations of data and queries. There have however also been approaches or use case analyses to completely store and query linguistic corpora with OWL (Burchardt et al., 2008), XQuery (Cassidy, 2002) or a via RDBMS (e.g. content of the DIRNDL corpus (Eckart et al., 2012) in its entirety has for a long time only been available through direct SQL queries), but historically speaking those cases generally represent a minority. 5 Related Work A lot of work has been invested already into laying the theoretical foundations for various aspects of and approaches to corpus querying, as well as into evaluating and comparing existing query systems. We distinguish between three types of contributions, namely (i) requirement analyses, (ii) evaluations of individual query languages or approaches and (iii) actual performance comparisons between multiple systems (feature-based or benchmarks). 
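To make the offloading concrete, the snippet below phrases one simple information need, an adjective immediately followed by a noun, once in a CQP-style concordancer notation (cf. the CWB/CQP family in Section 3.1) and once as the kind of SQL a relational backend would ultimately evaluate over a token table. The positional attribute, table and column names are illustrative assumptions, not taken from any particular system.

```python
# One information need -- an adjective immediately followed by a noun --
# expressed at two levels of abstraction. The attribute "pos" and the
# table/column names below are hypothetical.

cqp_style = '[pos="JJ"] [pos="NN"]'   # bracketed attribute-value notation

sql_equivalent = """
SELECT t1.sent_id, t1.position
FROM   token AS t1
JOIN   token AS t2
  ON   t2.sent_id  = t1.sent_id
 AND   t2.position = t1.position + 1
WHERE  t1.pos = 'JJ' AND t2.pos = 'NN';
"""
```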
Several contributions listing requirements for corpus query systems have been previously mentioned in Section 4. In addition, M´ırovsk´y (2008) provides a list of required language features for querying PDT and Lai and Bird (2004) do so for treebanks in general, specifically related to navigation, closures over relations and going “beyond ordered trees” in order to query more complex structures. This list of functional requirements is later extended on in Lai and Bird (2010) with features such as temporal organization and non-navigational requirements. While not exclusive to corpus query systems, technical aspects related to feasibility (e.g. scalability or computational complexity) or longterm maintainability (e.g. interoperability and extensibility) are also frequently emphasized by Lai and Bird (2004), Kepser (2003) and others. Besides the usability-focused scholarly position of Mueller (2010) around aspects of answer time, maintenance cost and the management of search results, we previously discussed additional non-technical requirements related to the general readability or postprocessing capabilities of a query language and its learnability in G¨artner and Kuhn (2018), the latter being a crucial factor for achieving wide-spread use in humanistic fields. Formal evaluations of query languages are somewhat rare, e.g. (Lai and Bird, 2010) for LPath and LPath+, (Kepser, 2004) for MonaSearch or in part (Kepser, 2003) for FSQ. Instead the vast majority of evaluations use example queries of varying complexity to compare different query languages or systems. Notable early work on query complexity was done by Lai and Bird (2004), comparing several query languages45 based on a set of linguistic information needs of increasing complexity. The example queries they provide have proven to be a good baseline for comparing the capabilities of query languages and subsequently found their way into many later tool evaluations, such as (Petersen, 2006a) for Emdros or in Clematide (2015) when highlighting features of particular query languages. Yet another evaluation approach was used by Frick et al. (2012) when they applied the classes defined in CQLF-1 as evaluation criteria in the comparison of COSMAS II, POLIQARP and AQL. Clematide (2015) provides a very thorough reflection and categorization of the various families of corpus query languages: text corpus, treebank, path-based46 and logic-based. A point he makes that resonates well with other surveys is the importance of striking the right balance between usability and technical aspects in any practical situation. In some cases actual performance benchmarks have been published, such as testing Emdros with different RDBMS backends (Petersen, 2006b), 45TGrep2, TIGERSearch, Emu, CorpusSearch, NiteQL, LPath 46We argue for a more differentiated view on path-based query languages: While Clematide (2015) considers PML-TQ to be part of this family, we propose to move it together with ICARUS into a tree-based category of query languages, as their use of bracketed tree-expressions to describe structural relations represent a slightly different approach. 6314 comparisons between TIGERSearch and Emdros in Petersen (2005), MonaSearch and TIGERSearch in (Maryns and Kepser, 2009b) and Luotolahti et al. (2015) benchmarking SETS against ICARUS. 
However, due to the rapid change in technologies and the architectural differences between query systems, it tends to be very difficult to provide accurate and meaningful performance comparisons, and readers are advised to carefully examine whether the reported use cases are applicable to their own.

6 Key Observations & Shortcomings

In this section we condense some of our observations from analyzing a large number of query systems. We focus on the following two aspects, which are well suited to pointing out challenges (stemming from past shortcomings) and to motivating directions for the development of future corpus query systems, protocols or architectures.

6.1 Shifting Design Goals

The different generations of corpus query systems listed in Section 3 are the results of design processes with generally very distinct goals. The first generation in Section 3.1 can be seen as the initial step towards having some means of querying beyond the search functions of grep or a text editor. Subsequently, the second time period described in Section 3.2 represents a general exploration phase: approaches in almost every direction were implemented, either as proofs of concept for new query features or to address very specific linguistic theories or phenomena. Many of those implementations, however, were not scalable to the degree demanded by the rapid growth47 of corpora. As such, the general trend in Section 3.3 was to overcome those limitations and to provide scalable systems with increased usability. At the same time, the overall expressiveness of the query languages provided took a step backwards. In particular, concepts such as closures over relations, (universal) quantification or existential negation were often sacrificed in favor of performance in more recent systems. Our vision of a hybrid architecture sketched in Section 7 is intended to overcome those limitations by utilizing and combining the different strengths of the systems involved (such as the robust performance of indexing systems and the expressiveness and flexibility of custom query evaluation engines).

47 Growth continually occurred both in size (number of primary units) and in complexity (number of annotation layers).

6.2 Fragmentation & Limited Reusability

Given the enormous amount of resources that has been invested into creating this zoo of corpus query languages and systems, it is surprising how little reuse and unification has occurred over the years. We attribute this trend to a variety of frequently recurring factors, particularly the following:
• Due to the lack of standards for categorizing the expressiveness of query languages, it has always been extremely difficult to determine whether an existing system could meet all the requirements posed by a new project, user scenario or corpus resource, leading to redundancy.48
• The technological heterogeneity49 involved also represented a major issue, one that is only slowly being overcome by the emergence of standards for corpus storage and interchange formats and by the shift to more modular architectures such as microservices or plugin engines, which make it much easier to adapt a system to new requirements.50
• Early query systems in particular often emerged as an interface to one very specific corpus or format, or to support the phenomena a certain project was interested in. As such, the limited resources typically available to short-term funded projects rarely allowed for extending previous, monolithically designed work.
Newly implemented (and often isolated) solutions focusing on a narrow selection of very specific query features or annotations were a common result.

48 An aspect that CQLF is now addressing, removing the need to essentially reverse engineer a tool or study its source code, as time constraints together with the lack of standardization often went along with poor documentation.
49 Ranging from platform/language lock-ins to format/storage dependencies, often in a monolithic composition.
50 Such as new query features, formats, storage/database solutions, standalone apps or various client-server architectures.

7 The Final Frontier – An Outlook

With several dozen systems contributing their individual variations, the pool of available corpus query tools and languages has become quite large. Navigating this ocean in order to find the right tool for the job and then learning to use it can already be as much effort as manually investigating the data at hand. Fortunately, the CQLF standardization initiative aims at providing developers with the means of locating their tools on a map of query features, so that prospective users may find them without an odyssey. While this effort is still at an early stage, we look forward to having such catalogs available in the not too distant future, allowing us to browse for query languages based on our individual information needs. However, many questions regarding the future of corpus querying remain, two of which we consider of particular importance and discuss in the following sections.

7.1 One Language to Query Them All?

Today we have a cluttered buffet of corpus query languages to pick from depending on our information needs. Interestingly, they all share the pros and cons of being designed as formal languages with the goal of taciturnity, meaning that for the untrained eye they usually represent just a weird salad of letters and special characters.51 This is particularly noteworthy, as all modern corpus query tools feature a rich GUI and could easily employ a more verbose query language while at the same time shielding users from the time overhead of creating queries through clever auto-completion or recommendation functions.

Likewise, today's corpus queries are not self-contained to the level of, for instance, SQL queries, which are composed of dedicated parts for scope selection, the actual constraints and result preparation. Usually only the constraint part is present in corpus query languages, with only a few exceptions,52 leaving additional configurations (result size limit, search direction, case sensitivity) exclusively to external components such as the GUI, which severely hampers the reproducibility of search results. A fully self-contained and human-readable query protocol that can embed any existing query language and augment it with (boilerplate) statements to bind the query content to actual corpora and annotation layers, to provide information about the query dialect and its version, and to store configuration and result preparation instructions would go a long way towards unification and potential interoperability of corpus query systems.

51 Kaufmann and Bernstein (2010) investigated the usability of natural language queries for interfaces to the semantic web, with positive results. It would be interesting to see similar studies on corpus query interfaces.
52 cf. PML-TQ for exemplary post-processing instructions, which allow results to be treated as tabular data and subjected to various transformation and aggregation operations, including textual reports.
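To make the idea of such a self-contained query envelope more tangible, the following is a purely hypothetical sketch, written here as a Python dictionary. Every field name, the corpus binding, the version strings and the embedded CQP-style constraint are invented for illustration only; no such protocol is defined in this survey or, to our knowledge, elsewhere.

```python
# Hypothetical sketch of a self-contained corpus query "envelope".
# All field names and values are illustrative assumptions, not an existing standard.
query_envelope = {
    "corpus": {                      # binding to actual corpora and annotation layers
        "name": "TIGER",
        "annotation_layers": ["pos", "lemma", "syntax"],
    },
    "dialect": {                     # which query language the payload is written in
        "language": "CQP",
        "version": "3.x",
    },
    "payload": '[pos="APPR"] [pos="NN"]',   # the embedded constraint part itself
    "configuration": {               # settings usually hidden in GUI components
        "result_limit": 1000,
        "case_sensitive": False,
        "search_direction": "left-to-right",
    },
    "result_preparation": {          # what to return and how to present it
        "context": {"left": 5, "right": 5},
        "export": "tsv",
    },
}
```

Because everything needed to re-run the search travels with the query itself, such an envelope would make search results reproducible independently of any particular GUI.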
7.2 Towards a Hybrid Architecture?

The typical architecture of corpus query systems today is a monolithic one and contains, from bottom to top, (i) a backend storage layer or custom data model, (ii) a custom query evaluator or query interface to said backend and (iii) a query parser or translator to process the raw user query. Choices in technology or algorithms for (i) through (iii) definitively dictate the basic nature and structure of the information that can be queried. They usually make it very difficult, if not impossible, to implement changes or extensions retrospectively or from the outside. A strong dependency on indexing to access large corpora also presupposes a priori knowledge of what information is meant to be searchable, frequently confining corpus query tools to the role of mere finding aids within a research process. We would like to see them become true enablers instead, allowing queries to go far beyond what a corpus has to offer with its bare annotations alone and, for example, to include the following extensions to create more informed search solutions:
• Use knowledge bases and similar external resources to allow more generalized queries, e.g. "find verbal constructions containing a preposition in combination with some sort of furniture".
• Add (semantic) similarity measures (e.g. word embeddings) and other approaches for increased fuzziness to improve example-based search.
• Offer true scripting support for users to extend or customize the functionality provided by a system. While this might affect performance in unpredictable and detrimental ways, raw (distributed) computing power and clever use of pre-filtering can offset the impact on performance.
Naturally, all of these proposed features (and especially the last one) require a drastically different and quite heterogeneous architecture. Taking the microservices approach of KorAP as an example, it is easy to imagine a hierarchically organized architecture of query translation and evaluation services working together (by partially answering queries, filtering the results or otherwise post-processing them) to provide the optimal combination of freedom in expressiveness and performance guarantees. Space does not permit us to provide a detailed description of such a hybrid approach. Instead, we refer to Gärtner (to appear) for an overview of our ongoing efforts to design and implement a hybrid corpus query architecture and an associated query protocol. Twenty years ago this might have seemed utterly unrealistic, but advances in information management systems and distributed computing certainly put this vision within technical reach.

References

Amaryllis. 2001. Amaryllis corpus - evaluation package. ELRA. ISLRN: 786-395-313-491-8.
Susan Armstrong-Warwick, Henry S. Thompson, David McKelvie, and Dominique Petitpierre. 1994. Data in your language: The ECI Multilingual Corpus 1. In Proceedings of the International Workshop on Sharable Natural Language Resources, Nara, Japan.
Liesbeth Augustinus, Vincent Vandeghinste, and Frank Van Eynde. 2012. Example-based treebank querying. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 3161–3167, Istanbul, Turkey. European Language Resources Association (ELRA). ACL Anthology Identifier: L12-1442.
Piotr Bański, Joachim Bingel, Nils Diewald, Elena Frick, Michael Hanl, Marc Kupietz, Piotr Pęzik, Carsten Schnober, and Andreas Witt. 2014. KorAP: the new corpus analysis platform at IDS Mannheim.
Human language technology challenges for computer science and linguistics. 6th language & technology conference december 7-9, 2013, Pozna´n, Poland, pages 586 – 587, Pozna´n. Uniwersytet im. Adama Mickiewicza w Poznaniu. Piotr Ba´nski, Elena Frick, and Andreas Witt. 2016. Corpus Query Lingua Franca (CQLF). In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA). Michael Barlow. 2002. ParaConc: Concordance software for multilingual parallel corpora. In Proceedings of the Third International Conference on Language Resources and Evaluation. Workshop on Language Resources in Translation Work and Research., Las Palmas, Canary Islands - Spain, pages 20–24. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed webcrawled corpora. Language Resources and Evaluation, 43(3):209–226. Eckhard Bick. 2018. Arbobanko - A treebank for Esperanto. In Proceedings of CICLing 2018 - 19th International Conference on Computational Linguistics, Germany. Springer. Joachim Bingel and Nils Diewald. 2015. KoralQuery – A General Corpus Query Protocol, volume 111, pages 1–5. Link¨oping University Electronic Press. Steven Bird, Yi Chen, Susan B Davidson, Haejoong Lee, and Yifeng Zheng. 2006. Designing and evaluating an XPath dialect for linguistic queries. In 22nd International Conference on Data Engineering (ICDE’06), pages 52–52. IEEE. Franck Bodmer. 2005. COSMAS II - Recherchieren in den Korpora des IDS. Sprachreport : Informationen und Meinungen zur deutschen Sprache, 21(3):2 – 5. Sabine Brants, Stefanie Dipper, Peter Eisenberg, Silvia Hansen-Schirra, Esther K¨onig, Wolfgang Lezius, Christian Rohrer, George Smith, and Hans Uszkoreit. 2004. TIGER: Linguistic interpretation of a German corpus. Research on Language and Computation, 2(4):597–620. Aljoscha Burchardt, Sebastian Pad´o, Dennis Spohr, Anette Frank, and Ulrich Heid. 2008. Constructing integrated corpus and lexicon models for multi-layer annotations in OWL DL. Linguistic Issues in Language Technology, 1:1–33. Jean Carletta, Jonathan Kilgour, Tim O’Donnell, Stefan Evert, and Holger Voormann. 2003. The NITE object model library for handling structured linguistic annotation on multimodal data sets. In In Proceedings of the EACL Workshop on Language Technology and the Semantic Web (3rd Workshop on NLP and XML, NLPXML-2003). Steve Cassidy. 2002. XQuery as an annotation query language: a use case analysis. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02), Las Palmas, Canary Islands - Spain. European Language Resources Association (ELRA). Steve Cassidy and Jonathan Harrington. 1996. EMU: an enhanced hierarchical speech data management system. In Proceedings of the Sixth Australian International Conference on Speech Science and Technology, pages 361–366. Yun-Chuang Chiao, Olivier Kraif, Dominique Laurent, Thi Minh Huyen Nguyen, Nasredine Semmar, Franc¸ois Stuck, Jean V´eronis, and Wajdi Zaghouani. 2006. Evaluation of multilingual text alignment systems: the ARCADE II project. In Proceedings of the fifth International Conference on Language Resources and Evaluation (LREC 2006), pages 1975– 1979, Genoa, Italy. Oliver Christ. 1994. A modular and flexible architecture for an integrated corpus query system. In Proceedings of COMPLEX’94: 3rd Conference on Computational Lexicography and Text Research, pages 23–32, Budapest. 
Simon Clematide. 2015. Reflections and a proposal for a query and reporting language for richly annotated multiparallel corpora. In Proceedings of the Workshop on Innovative Corpus Query and Visualization Tools at NODALIDA 2015, May 11-13, 2015, Vilnius, Lithuania, 111, pages 6–16. Link¨oping University Electronic Press, Link¨opings universitet. Simon Clematide, Johannes Gra¨en, and Martin Volk. 2016. Multilingwis – a multilingual search tool for multi-word units in multiparallel corpora. In Gloria Corpas Pastor, editor, Computerised and Corpusbased Approaches to Phraseology: Monolingual 6317 and Multilingual Perspectives/Fraseolog´ıa computacional y basada en corpus: perspectivas monoling¨ues y multiling¨ues, page n/a. Tradulex, Geneva. Martin Cmejrek, Jan Cur´ın, Jan Hajic, and Jir´ı Havelka. 2005. Prague Czech-English dependency treebank resource for structure-based MT. In EAMT 2005 Conference Proceedings, pages 73–78. Steffan Corley, Martin Corley, Frank Keller, Matthew W. Crocker, and Shari Trewin. 2001. Finding syntactic structure in unparsed corpora the Gsearch corpus query system. Computers and the Humanities, 35(2):81–94. Mark Davies. 2005. The advantage of using relational databases for large corpora: Speed, advanced queries, and unlimited annotation. International Journal of Corpus Linguistics, 10(3):307–334. Nils Diewald, Michael Hanl, Eliza Margaretha, Joachim Bingel, Marc Kupietz, Piotr Ba´nski, and Andreas Witt. 2016. KorAP architecture – diving in the deep sea of corpus data. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 3586– 3591, Portoroˇz, Slovenia. European Language Resources Association (ELRA). Stefanie Dipper and Michael G¨otze. 2005. Accessing heterogeneous linguistic data – generic XML-based representation and flexible visualization. In Proceedings of the 2nd Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics, pages 206–210, Poznan, Poland. Christ-Jan Doedens. 1994. Text Databases: One Database Model and Several Retrieval Languages, volume 14 of Language and Computers. Brill Rodopi. Kerstin Eckart, Arndt Riester, and Katrin Schweitzer. 2012. A discourse information radio news database for linguistic analysis. In Christian Chiarcos, Sebastian Nordhoff, and Sebastian Hellmann, editors, Linked Data in Linguistics. Representing and Connecting Language Data and Language Metadata, pages 65–75. Springer, Heidelberg. Stefan Evert and Andrew Hardie. 2011. Twenty-first century Corpus Workbench: Updating a query architecture for the new millennium. In Proceedings of the Corpus Linguistics 2011 conference, Birmingham. Stefan Evert and Holger Voormann. 2002. The NITE query language. Karlheinz Everts. 2000. Das Karl-May-Korpus – Ein linguistisch annotiertes Korpus der Werke des Autors Karl May und einiger seiner Zeitgenossen. Aufbau und Analysen. Online. Version 5. Lukas C. Faulstich, Ulf Leser, and Thorsten Vitt. 2006. Implementing a linguistic query language for historic texts. In Current Trends in Database Technology – EDBT 2006, pages 601–612, Berlin, Heidelberg. Springer Berlin Heidelberg. Marcello Federico, Dimitri Giordani, and Paolo Coletti. 2000. Development and evaluation of an Italian broadcast news corpus. In Proceedings of the Second International Conference on Language Resources and Evaluation (LREC 2000), Athens, Greece. European Language Resources Association (ELRA). Elena Frick, Carsten Schnober, and Piotr Ba´nski. 2012. 
Evaluating query languages for a corpus processing system. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12), Istanbul, Turkey. European Language Resources Association (ELRA). Markus G¨artner. to appear. The corpus query middleware of tomorrow – A proposal for a hybrid corpus query architecture. In Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-8). Markus G¨artner, Anders Bj¨orkelund, Gregor Thiele, Wolfgang Seeker, and Jonas Kuhn. 2014. Visualization, search, and error analysis for coreference annotations. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 7–12, Baltimore, Maryland. Association for Computational Linguistics. Markus G¨artner and Jonas Kuhn. 2018. Making corpus querying ready for the future: Challenges and concepts. In Proceedings of the 27th International Conference on Computational Linguistics, KONVENS 2018, Wien, ¨Osterreich. Markus G¨artner, Katrin Schweitzer, Kerstin Eckart, and Jonas Kuhn. 2015. Multi-modal visualization and search for text and prosody annotations. In Proceedings of ACL-IJCNLP 2015 System Demonstrations, pages 25–30, Beijing, China. Association for Computational Linguistics and The Asian Federation of Natural Language Processing. Markus G¨artner, Gregor Thiele, Wolfgang Seeker, Anders Bj¨orkelund, and Jonas Kuhn. 2013. ICARUS – an extensible graphical search tool for dependency treebanks. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60, Sofia, Bulgaria. Association for Computational Linguistics. Sumukh Ghodke and Steven Bird. 2012. Fangorn: A system for querying very large treebanks. In COLING 2012: Demonstration Papers, pages 175–182, Mumbai, India. Johannes Gra¨en, Simon Clematide, and Martin Volk. 2016. Efficient exploration of translation variants in large multiparallel corpora using a relational 6318 database. In 4th Workshop on the Challenges in the Management of Large Corpora, pages 20–23. s.n. David Graff. 1995. European Language Newspaper Text LDC95T11. Web Download, Philadelphia: Linguistic Data Consortium. David Graff and Christopher Cieri. 2003. English Gigaword LDC2003T05. Web Download, Philadelphia: Linguistic Data Consortium. David Graff and Rebecca Finch. 1994. Multilingual text resources at the Linguistic Data Consortium. In Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 811, 1994. Jan Hajiˇc. 2006. Complex corpus annotation: The Prague Dependency Treebank. In Insight into Slovak and Czech Corpus Linguistic, pages 54–73, Bratislava, Slovakia. Jazykovedn´y ´ustav ˇL. ˇSt´ura, SAV. Jan Hajiˇc, Barbora Vidov´a Hladk´a, and Petr Pajas. 2001. The Prague Dependency Treebank: Annotation structure and support. In Proceedings of the IRCS Workshop on Linguistic Databases, pages 105– 114. University of Pennsylvania, Philadelphia, USA. Hans Van Halteren and Theo Van Den Heuvel. 1990. Linguistic Exploitation of Syntactic Databases: The Use of the Nijmegen LDB Program (LANGUAGE AND COMPUTERS). Brill Rodopi. Olivier Hamon, Andrei Popescu-Belis, Khalid Choukri, Marianne Dabbadie, Anthony Hartley, Widad Mustafa El Hadi, Martin Rajman, and Isma¨ıl Timimi. 2006. CESTA: First conclusions of the technolangue MT evaluation campaign. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06), Genoa, Italy. 
European Language Resources Association (ELRA). Ulrich Heid and Andreas Mengel. 1999. Query language for research in phonetics. In International Congress of Phonetic Sciences (ICPhS 99), pages 1225–1228, San Francisco. Nancy Ide, Collin Baker, Christiane Fellbaum, and Rebecca Passonneau. 2010. The Manually Annotated Sub-Corpus: A community resource for and by the people. In Proceedings of the ACL 2010 Conference Short Papers, pages 68–73, Uppsala, Sweden. Association for Computational Linguistics. Radu Ion, Elena Irimia, Dan S¸tef˘anescu, and Dan Tufis¸. 2012. ROMBAC: The Romanian Balanced Annotated Corpus. In Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC’12), Istanbul, Turkey. European Language Resources Association (ELRA). Amy Isard, David McKelvie, Andreas Mengel, and Morten Baun Møller. 2000. The Mate Workbench - a tool for annotating XML corpora. In Computer-Assisted Information Retrieval (Recherche d’Information et ses Applications) RIAO 2000, 6th International Conference, College de France, France, April 12-14, 2000. Proceedings, pages 411–425. Miloˇs Jakub´ıˇcek, Adam Kilgarriff, Diana McCarthy, and Pavel Rychl´y. 2010. Fast syntactic searching in very large corpora for many languages. In Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation, pages 741–747, Tohoku University, Sendai, Japan. Institute of Digital Enhancement of Cognitive Processing, Waseda University. Daniel Janus and Adam Przepi´orkowski. 2007. Poliqarp: An open source corpus indexer and search engine with syntactic extensions. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL ’07, pages 85–88, Stroudsburg, PA, USA. Association for Computational Linguistics. Matthias Jarke and Yannis Vassiliou. 1985. A framework for choosing a database query language. ACM Comput. Surv., 17(3):313–340. Esther Kaufmann and Abraham Bernstein. 2010. Evaluating the usability of natural language query languages and interfaces to semantic web knowledge bases. Journal of Web Semantics, 8(4):377–393. Stephan Kepser. 2003. Finite structure query: A tool for querying syntactically annotated corpora. In Proceedings of the Tenth Conference on European Chapter of the Association for Computational Linguistics - Volume 1, EACL ’03, pages 179–186, Stroudsburg, PA, USA. Association for Computational Linguistics. Stephan Kepser. 2004. Querying linguistic treebanks with monadic second-order logic in linear time. Journal of Logic, Language and Information, 13(4):457–470. Adam Kilgarriff, V´ıt Baisa, Jan Buˇsta, Miloˇs Jakub´ıˇcek, Vojtˇech Kov´aˇr, Jan Michelfeit, Pavel Rychl´y, and V´ıt Suchomel. 2014. The Sketch Engine: ten years on. Lexicography, pages 7–36. Adam Kilgarriff, Pavel Rychl´y, Pavel Smrˇz, and David Tugwell. 2004. The Sketch Engine. Information Technology. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the tenth Machine Translation Summit, pages 79–86, Phuket, Thailand. Esther K¨onig and Wolfgang Lezius. 2000. A description language for syntactically annotated corpora. In Proceedings of the 18th Conference on Computational Linguistics - Volume 2, COLING ’00, pages 1056–1060, Stroudsburg, PA, USA. Association for Computational Linguistics. 6319 Thomas Krause, Ulf Leser, and Anke L¨udeling. 2016. graphANNIS: A fast query engine for deeply annotated linguistic corpora. JLCL, 31(1):iii–25. Thomas Krause and Amir Zeldes. 2014. 
ANNIS3: A new architecture for generic corpus query and visualization. Digital Scholarship in the Humanities. Catherine Lai and Steven Bird. 2004. Querying and updating treebanks: A critical survey and requirements analysis. In In Proceedings of the Australasian Language Technology Workshop, pages 139–146. Catherine Lai and Steven Bird. 2005. LPath+: A firstorder complete language for linguistic tree query. In Proceedings of the 19th Pacific Asia Conference on Language, Information and Computation, pages 1– 12, Taipei, Taiwan, R.O.C. Institute of Linguistics, Academia Sinica. Catherine Lai and Steven Bird. 2010. Querying linguistic trees. J. of Logic, Lang. and Inf., 19(1):53–73. Wolfgang Lezius. 2002. Ein Suchwerkzeug f¨ur syntaktisch annotierte Textkorpora. Ph.D. thesis, IMS, University of Stuttgart. Arbeitspapiere des Instituts f¨ur Maschinelle Sprachverarbeitung (AIMS), volume 8, number 4. Mark Liberman. 1989. Text on tap: The ACL/DCI. In Speech and Natural Language: Proceedings of a Workshop Held at Cape Cod, Massachusetts, October 15-18, 1989. Joakim Lundborg, Torsten Marek, and Martin Volk. 2007. Using the Stockholm TreeAligner. In 6th Workshop on Treebanks and Linguistic Theories. Juhani Luotolahti, Jenna Kanerva, Sampo Pyysalo, and Filip Ginter. 2015. SETS: Scalable and efficient tree search in dependency graphs. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 51–55, Denver, Colorado. Association for Computational Linguistics. Brian MacWhinney, Steven Bird, Christopher Cieri, and Craig Martell. 2004. Talkbank: Building an open unified multimodal database of communicative interaction. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04), Lisbon, Portugal. European Language Resources Association (ELRA). Torsten Marek, Joakim Lundborg, and Martin Volk. 2008. Extending the TIGER query language with universal quantification. In KONVENS 2008: 9. Konferenz zur Verarbeitung nat¨urlicher Sprache, pages 5–17. Hendrik Maryns and Stephan Kepser. 2009a. MonaSearch – a tool for querying linguistic treebanks. In Treebanks and Linguistic Theories 2009, pages 29–40, Groningen. Hendrik Maryns and Stephan Kepser. 2009b. Monasearch: Querying linguistic treebanks with monadic second-order logic. In The 7th International Workshop on Treebanks and Linguistic Theories. Andreas Mengel. 1999. MATE deliverable D3. 1– specification of coding workbench: 3.8 improved query language (Q4M). Technical report, Technical report, Institut f¨ur Maschinelle Sprachverarbeitung, Stuttgart, 18 .... Andreas Mengel, Ulrich Heid, Arne Fitschen, and Stefan Evert. 1999. Specification of coding workbench: Improved query language (q4m). Technical report, Technical Report MATE Deliverable. George A. Miller. 1995. WordNet: A Lexical Database for English. Commun. ACM, 38(11):39–41. Jiˇr´ı M´ırovsk´y. 2006. Netgraph: A tool for searching in Prague Dependency Treebank 2.0. In Proceedings of TLT 2006, pages 211–222, Praha, Czechia. ´UFAL MFF UK. Jiˇr´ı M´ırovsk´y. 2008. PDT 2.0 requirements on a query language. In Proceedings of ACL-08: HLT, pages 37–45, Columbus, Ohio. Association for Computational Linguistics. Gregor M¨ohler. 2001. Improvements of the PaIntE model for F0 parametrization. Technical report, Institute of Natural Language Processing, University of Stuttgart. Draft version. Martin Mueller. 2010. Towards a digital carrel: A report about corpus query tools. 
Gerald Nelson, Sean Wallis, and Bas Aarts. 2002. Exploring natural language: working with the British component of the International Corpus of English. John Benjamins Publishing. Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajiˇc, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 1659–1666, Portoroˇz, Slovenia. European Language Resources Association (ELRA). Gertjan van Noord, Gosse Bouma, Frank Van Eynde, Dani¨el de Kok, Jelmer van der Linde, Ineke Schuurman, Erik Tjong Kim Sang, and Vincent Vandeghinste. 2013. Large scale syntactic annotation of written Dutch: Lassy. In Peter Spyns and Jan Odijk, editors, Essential Speech and Language Technology for Dutch: Results by the STEVIN programme, Theory and Applications of Natural Language Processing, pages 147–164. Springer, Berlin, Heidelberg. 6320 Roman Ondruˇska, Jiˇr´ı M´ırovsk´y, and Daniel Pr˚uˇsa. 2002. Searching through Prague Dependency Treebank-conception and architecture. In Proceedings of The First Workshop on Treebanks and Linguistic Theories, pages 114–122, Sofia, Bulgaria and Tuebingen, Germany. LML, Bulgarian Academy of Sciences and SfS, Tuebingen University. Petr Pajas. 2009. TrEd. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Petr Pajas and Jan ˇStˇep´anek. 2005. A generic XMLbased format for structured linguistic annotation and its application to Prague DependencyTreebank 2.0. Technical Report 29, ´UFAL MFF UK, Prague, Czech Republic. Petr Pajas and Jan ˇStˇep´anek. 2009. System for querying syntactically annotated corpora. In ACLIJCNLP: Software Demonstrations, pages 33–36, Suntec, Singapore. Ulrik Petersen. 2004. Emdros: A text database engine for analyzed or annotated text. In Proceedings of the 20th International Conference on Computational Linguistics, COLING ’04, Stroudsburg, PA, USA. Association for Computational Linguistics. Ulrik Petersen. 2005. Evaluating corpus query systems on functionality and speed: TIGERSearch and Emdros. In International Conference Recent Advances in Natural Language Processing. Proceedings, pages 387–391. Incoma, Ltd. Ulrik Petersen. 2006a. Principles, implementation strategies, and evaluation of a corpus query system. In Finite-State Methods and Natural Language Processing, pages 215–226, Berlin, Heidelberg. Springer Berlin Heidelberg. Ulrik Petersen. 2006b. Querying both parallel and treebank corpora: Evaluation of a corpus query system. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06), Genoa, Italy. European Language Resources Association (ELRA). Florian Petran, Marcel Bollmann, Stefanie Dipper, and Thomas Klein. 2016. ReM: A reference corpus of Middle High German - corpus compilation, annotation, and access. Journal for Language Technology and Computational Linguistics, 31(2):1–15. Rashmi Prasad, Bonnie Webber, Alan Lee, and Aravind Joshi. 2019. Penn Discourse Treebank Version 3.0 LDC2019T05. Web Download, Philadelphia: Linguistic Data Consortium. Adam Przepi´orkowski. 2004. The IPI PAN Corpus: Preliminary version. Institute of Computer Science, Polish Academy of Sciences, Warsaw. 
Adam Przepi´orkowski, Zygmunt Krynicki, Łukasz Debowski, Marcin Woli´nski, Daniel Janus, and Piotr Ba´nski. 2004. A search tool for corpora with positional tagsets and ambiguities. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04), pages 1235– 1238. Beth Randall. 2008. CorpusSearch 2 users guide. Philip Resnik and Aaron Elkiss. 2005. The Linguist’s Search Engine: An overview. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 33–36, Ann Arbor, Michigan. Association for Computational Linguistics. Sebastian Riedel. 2008. What’s Wrong With My NLP? http://code.google.com/p/whatswrong/. Douglas L.T. Rohde. 2001. TGrep2 user manual. http://tedlab.mit.edu/ dr/Tgrep2/. Roland Sch¨afer. 2015. Processing and querying large web corpora with the COW14 architecture. In Proceedings of Challenges in the Management of Large Corpora 3 (CMLC-3), Lancaster. UCREL, IDS. Roland Sch¨afer and Felix Bildhauer. 2012. Building large corpora from the web using a new efficient tool chain. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC’12), Istanbul. European Language Resources Association (ELRA). Katrin Schweitzer, Kerstin Eckart, Markus G¨artner, Agnieszka Falenska, Arndt Riester, Ina R¨osiger, Antje Schweitzer, Sabrina Stehwien, and Jonas Kuhn. 2018. German radio interviews: The GRAIN release of the SFB732 Silver Standard Collection. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Ilona Steiner and Laura Kallmeyer. 2002. VIQTORYA – a visual query tool for syntactically annotated corpora. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02), Las Palmas, Canary Islands - Spain. European Language Resources Association (ELRA). Jan ˇStˇep´anek and Petr Pajas. 2010. Querying diverse treebanks in a uniform way. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC’10), Valletta, Malta. European Language Resources Association (ELRA). Ann Taylor. 2003. CorpusSearch version 1.1 - reference manual. Gregor Thiele, Wolfgang Seeker, Markus G¨artner, Anders Bj¨orkelund, and Jonas Kuhn. 2014. A graphical interface for automatic error mining in corpora. In Proceedings of the Demonstrations at the 14th 6321 Conference of the European Chapter of the Association for Computational Linguistics, pages 57– 60, Gothenburg, Sweden. Association for Computational Linguistics. Sean Wallis and Gerald Nelson. 2000. Exploiting fuzzy tree fragment queries in the investigation of parsed corpora. Literary and linguistic computing, 15(3):339–362. Amir Zeldes, Anke L¨udeling, Julia Ritz, and Christian Chiarcos. 2009. ANNIS: a search tool for multilayer annotated corpora. In Proceedings of Corpus Linguistics.
This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.563.

Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6322–6333, July 5 - 10, 2020. ©2020 Association for Computational Linguistics

A Contextual Hierarchical Attention Network with Adaptive Objective for Dialogue State Tracking

Yong Shan12†, Zekang Li12, Jinchao Zhang3, Fandong Meng3, Yang Feng12∗, Cheng Niu3, Jie Zhou3
1 Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS)
2 University of Chinese Academy of Sciences
3 Pattern Recognition Center, WeChat AI, Tencent Inc, China
{shanyong18s,lizekang19g,fengyang}@ict.ac.cn
{dayerzhang,fandongmeng,chengniu,withtomzhou}@tencent.com

Abstract

Recent studies in dialogue state tracking (DST) leverage historical information to determine states, which are generally represented as slot-value pairs. However, most of them are limited in how efficiently they exploit relevant context, due to the lack of a powerful mechanism for modeling interactions between the slot and the dialogue history. Besides, existing methods usually ignore the slot imbalance problem and treat all slots indiscriminately, which limits the learning of hard slots and eventually hurts overall performance. In this paper, we propose to enhance DST by employing a contextual hierarchical attention network that not only discerns relevant information at both the word level and the turn level but also learns contextual representations. We further propose an adaptive objective to alleviate the slot imbalance problem by dynamically adjusting the weights of different slots during training. Experimental results show that our approach reaches 52.68% and 58.55% joint accuracy on the MultiWOZ 2.0 and MultiWOZ 2.1 datasets, respectively, and achieves new state-of-the-art performance with considerable improvements (+1.24% and +5.98%).1

1 Introduction

Recently, task-oriented dialogue systems have attracted increasing attention in both industry and academia due to their broad application in helping users accomplish tasks through spoken interactions (Young, 2002; Young et al., 2013; Gao et al., 2019a). Dialogue state tracking (DST) is an essential part of dialogue management in task-oriented dialogue systems. Given the current utterances and the dialogue history, DST aims to determine the set of goals that the user has expressed at each turn, which are represented as slot-value pairs (Williams et al., 2013; Henderson et al., 2014a).

†Joint work with Pattern Recognition Center, WeChat AI, Tencent Inc.
∗Yang Feng is the corresponding author.
1Code is available at https://github.com/ictnlp/CHAN-DST

User: Hello, I'm looking for a resraurant with fair prices.
State: price range=moderate
Sys: OK. There are Golden Wok Chinese restaurant and Nirala which serves Indian food, which one do you like?
User: Are they both have a reasonable price ?
State: price range=moderate
Sys: Of course.
User: Please tell me the address of Golden Wok.
State: price range=moderate; food=chinese

Table 1: An example dialogue. At the last turn, it is necessary to capture relevant information in the dialogue history to correctly predict the value of the slot "food", which is underlined. "User" and "Sys" represent the user utterance and system response respectively, and the italic text marks dialogue states.
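For concreteness, the turn-wise states in Table 1 could be rendered in code roughly as follows. This is an illustrative sketch only: the Python structure, the "domain-slot" prefixes and the joint-accuracy helper are assumptions for exposition, not the paper's data format.

```python
# Illustrative rendering of the dialogue states from Table 1.
# Slot names follow the "domain-slot" convention described later in the paper;
# the exact keys and the data structure are assumptions.
dialogue_states = [
    {"restaurant-price range": "moderate"},                                   # turn 1
    {"restaurant-price range": "moderate"},                                   # turn 2
    {"restaurant-price range": "moderate", "restaurant-food": "chinese"},     # turn 3
]

def joint_turn_correct(predicted: dict, gold: dict) -> bool:
    """A turn counts as correct for joint accuracy only if every slot-value pair matches."""
    return predicted == gold
```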
As Table 1 shows, the dialogue state usually depends on relevant context in the dialogue history, as demonstrated in previous studies (Sharma et al., 2019; Wu et al., 2019). However, traditional DST models usually determine dialogue states by considering only the utterances at the current turn (Henderson et al., 2014b; Mrkšić et al., 2017; Zhong et al., 2018; Chao and Lane, 2019), which neglects the dialogue history. Recent research attempts to address this problem by introducing historical dialogue information into the prediction of slot-value pairs. Most of these approaches rely on a naive attention between slots and the concatenated historical utterances (Wu et al., 2019; Zhou and Small, 2019; Gao et al., 2019b; Zhang et al., 2019; Le et al., 2020a,b), only utilize part of the history (Ren et al., 2019; Kim et al., 2019; Sharma et al., 2019), or lack direct interactions between slots and history (Ren et al., 2018; Lee et al., 2019; Goel et al., 2019). In short, these methods are deficient in exploiting relevant context from the dialogue history.

Furthermore, there are differences in the frequency of different slots and different slot-value pairs. For example, in the MultiWOZ 2.0 training set, there are 15384 samples related to the slot "train-day" while only 5843 for the slot "attraction-name"; the slot-value pair (attraction-area, center) occurs 5432 times while (taxi-departure, royal spice) occurs only 9 times; etc. We refer to this problem as "slot imbalance", which makes the learning difficulty vary across slots (refer to the Appendix for details). However, existing approaches usually ignore the slot imbalance problem and treat all slots indiscriminately, which limits the learning of the hard slots and eventually damages overall performance.

To address the two aforementioned problems, we propose an effective model equipped with a contextual hierarchical attention network (CHAN) to fully exploit relevant context from the dialogue history, and an adaptive objective to alleviate the slot imbalance problem. In CHAN, the slot first retrieves word-level relevant information from the utterances at each turn. This word-level information is then encoded into contextual representations through rich interactions. Finally, the slot aggregates all contextual representations into turn-level relevant information, which we combine with the word-level information to obtain the outputs. To further enhance the ability to exploit relevant context, we employ a state transition prediction task to assist DST learning. For the slot imbalance problem, our adaptive objective dynamically evaluates slot difficulties in an accuracy-sensitive manner and then adaptively adjusts the learning weights of different slots. Thus, it balances the learning of all slots as far as possible.

We evaluate the effectiveness of our model on the MultiWOZ 2.0 and MultiWOZ 2.1 datasets. Experimental results show that our model reaches 52.68% and 58.55% joint accuracy, outperforming the previous state of the art by +1.24% and +5.98%, respectively. The ablation study also demonstrates the effectiveness of each module in our model.

Our contributions are as follows:
• We propose an effective contextual hierarchical attention network to fully exploit relevant context from the dialogue history and employ a state transition prediction task to further enhance it.
• We design an adaptive objective to address the slot imbalance problem by dynamically adjusting the weight of each slot. To the best of our knowledge, our method is the first to address the slot imbalance problem in DST.
• Experimental results show that our model achieves state-of-the-art performance with significant improvements over all previous models.

2 Approach

As shown in Figure 1, the proposed model consists of three components: 1) the contextual hierarchical attention network (CHAN); 2) the state transition prediction module; 3) the adaptive objective. We share all model parameters across slots to keep our model universal for all slots.

2.1 Problem Statement

Given a dialogue X = {(U_1, R_1), ..., (U_T, R_T)} of T turns, where U_t is the user utterance and R_t the system response at turn t, we define the dialogue state at each turn t as B_t = {(s, v_t), s ∈ S}, where S is the set of slots and v_t is the corresponding value of slot s. Following Lee et al. (2019), we use the term "slot" to refer to the concatenation of a domain name and a slot name, e.g. "restaurant-food", in order to represent both domain and slot information. Similar to Ren et al. (2018) and Lee et al. (2019), we cast dialogue state tracking as a multi-label classification problem in which we score each value against slot-related features in a non-parametric way and then choose the best candidate. We also add a literal "none" value to the value set of each slot to represent that no corresponding value is tracked.

2.2 Contextual Hierarchical Attention Network

The pre-trained BERT language model (Devlin et al., 2019) has recently shown a strong ability to produce universal contextual representations, so we employ BERT to encode utterances, slots and values. To better retrieve relevant context from the dialogue history, we devise a Slot-Word Attention and a Slot-Turn Attention to query relevant keywords and turns, respectively. Specifically, we place a Context Encoder between the word-level and turn-level attention to capture contextual representations of the relevant information from the dialogue history. Furthermore, we devise a Global-Local Fusion Gate to balance the information from the global context and the local utterances.
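The following is a minimal sketch of this hierarchical attention flow in PyTorch, intended only to make the data flow concrete. It is not the authors' implementation: the single-head dot-product attention, the Transformer-encoder stand-in for the Context Encoder, the simple concatenation gate in place of the Global-Local Fusion Gate, the use of the last turn's word-level summary, and all tensor shapes are assumptions made for brevity.

```python
# Minimal, simplified sketch of the hierarchical attention described in Sec. 2.2.
# Module names follow the paper's terminology; implementation details are assumptions.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def attend(query, keys, values):
    """Single-head scaled dot-product attention.
    query: (B, d); keys/values: (B, L, d); returns (B, d)."""
    scores = torch.einsum("bd,bld->bl", query, keys) / math.sqrt(keys.size(-1))
    weights = F.softmax(scores, dim=-1)
    return torch.einsum("bl,bld->bd", weights, values)


class HierarchicalSlotAttention(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        # Stand-in for the Context Encoder over per-turn summaries.
        self.context_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Crude stand-in for the Global-Local Fusion Gate.
        self.gate = nn.Linear(2 * hidden, hidden)

    def forward(self, slot_emb, turn_word_states):
        # slot_emb: (B, d) BERT encoding of a slot name, e.g. "restaurant-food"
        # turn_word_states: (B, T, L, d) BERT word states for each of T turns
        B, T, L, d = turn_word_states.shape
        flat = turn_word_states.view(B * T, L, d)
        # 1) Slot-Word Attention: relevant words within every turn.
        word_level = attend(slot_emb.repeat_interleave(T, dim=0), flat, flat).view(B, T, d)
        # 2) Context Encoder: contextualize the per-turn summaries.
        contextual = self.context_encoder(word_level)            # (B, T, d)
        # 3) Slot-Turn Attention: relevant turns in the history.
        turn_level = attend(slot_emb, contextual, contextual)    # (B, d)
        # 4) Combine word-level (current turn) and turn-level information.
        fused = torch.tanh(self.gate(torch.cat([word_level[:, -1], turn_level], dim=-1)))
        return fused

    @staticmethod
    def score_values(slot_state, value_embs):
        # Non-parametric value scoring in the spirit of Sec. 2.1: here simply a
        # dot product with BERT encodings of the candidate values (V, d).
        return slot_state @ value_embs.t()                       # (B, V)
```

In the full model, the state transition prediction task and the adaptive objective described above would sit on top of these representations; both are omitted from this sketch.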
[Figure 1: model architecture (the diagram itself is not recoverable from the text extraction). Legible labels include: fixed and fine-tuned BERT encoders over [CLS]/[SEP]-delimited utterances R_1, U_1, ..., R_{t-1}, U_{t-1}, R_t, U_t and over slot/value strings (e.g. "hotel-name", "autumn house"); Slot-Word Attention; Context Encoder (N x masked multi-head self-attention with add & norm, feed-forward and linear sublayers, plus positional encodings); Slot-Turn Attention; concatenation yielding c^ctx_{s,t}; and the State Transition Prediction module.]
Figure 1: The architecture of our model. At turn $t$, the slot retrieves relevant information among turns $\{1, ..., t\}$ at both the word level and the turn level. Specifically, we utilize a context encoder between the word level and the turn level to capture the relationships between historical relevant information. Finally, we combine the global relevant context $c^{turn}_{s,t}$ and the local dialogue information $c^{word}_{s,t}$ as outputs. During training, we first train the DST task and the state transition prediction task jointly, and then fine-tune our model with the adaptive objective.

Sentence Encoder. BERT leverages a special token [CLS] to aggregate the representation of a whole sentence and a special token [SEP] to indicate the end of a sentence. For the user utterance $U_t = \{w^u_1, ..., w^u_l\}$ and the system response $R_t = \{w^r_1, ..., w^r_{l'}\}$ at dialogue turn $t$, we concatenate them with these special tokens and encode them into contextual word representations $h_t$ as follows:
$$h_t = \mathrm{BERT}_{finetune}([R_t; U_t]) \quad (1)$$
where $\mathrm{BERT}_{finetune}$ means that it will be fine-tuned during training. Therefore, $\mathrm{BERT}_{finetune}$ will learn a corresponding generalization of sentence representations and adapt to the dialogue state tracking task. For slot $s$ and value $v_t$, we adopt another pre-trained $\mathrm{BERT}_{fixed}$ to encode them into contextual semantic vectors $h_s$ and $h^v_t$, respectively. Different from the utterances, we use the output vector of the special token [CLS] to obtain the whole sentence representation:
$$h_s = \mathrm{BERT}_{fixed}(s), \qquad h^v_t = \mathrm{BERT}_{fixed}(v_t) \quad (2)$$
where the weights of $\mathrm{BERT}_{fixed}$ are fixed during training; thus our model is scalable to any unseen slots and values by sharing the original BERT representation.

Slot-Word Attention. The slot-word attention is a multi-head attention, $\mathrm{MultiHead}(Q, K, V)$, which takes a query matrix $Q$, a key matrix $K$ and a value matrix $V$ as inputs; refer to Vaswani et al. (2017) for details. For each slot $s$, the slot-word attention summarizes the word-level slot-related information of each turn $t$ into a $d$-dimensional vector $c^{word}_{s,t}$:
$$c^{word}_{s,t} = \mathrm{MultiHead}(h_s, h_t, h_t) \quad (3)$$

Context Encoder. The context encoder is a unidirectional transformer encoder devised to model the contextual relevance of the extracted word-level slot-related information among turns $\{1, ..., t\}$. It contains a stack of $N$ identical layers, each with two sub-layers. The first sub-layer is a masked multi-head self-attention ($\mathrm{MultiHead}$) in which $Q = K = V$. The second sub-layer is a position-wise fully connected feed-forward network (FFN), which consists of two linear transformations with a ReLU activation (Vaswani et al., 2017):
$$\mathrm{FFN}(x) = \max(0, xW_1 + b_1)W_2 + b_2 \quad (4)$$
Formally, the output of the context encoder $c^{ctx}_{s,\le t}$ can be denoted as follows:
$$m_n = \mathrm{FFN}(\mathrm{MultiHead}(m_{n-1}, m_{n-1}, m_{n-1})), \quad m_0 = [c^{word}_{s,1} + \mathrm{PE}(1), ..., c^{word}_{s,t} + \mathrm{PE}(t)], \quad c^{ctx}_{s,\le t} = m_N \quad (5)$$
where $m_n$ is the output of the $n$-th layer of the context encoder and $\mathrm{PE}(\cdot)$ denotes the positional encoding function. Note that residual connections and layer normalization are omitted in the formula.
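To make the dataflow of Eqs. (1)–(5) concrete, below is a minimal PyTorch sketch of the slot-word attention and the masked context encoder. It is an illustrative reimplementation, not the authors' code: the BERT encoders of Eqs. (1)–(2) are replaced by random stand-in tensors, and all sizes and layer counts are arbitrary.

```python
import torch
import torch.nn as nn

d, heads, n_turns, seq_len = 64, 4, 5, 12        # illustrative sizes, not the paper's settings

# Stand-ins for the BERT outputs of Eqs. (1)-(2):
h_t = torch.randn(n_turns, seq_len, d)           # token representations of each turn [R_t; U_t]
h_s = torch.randn(1, d)                          # [CLS] representation of slot s

# Eq. (3): slot-word attention produces one d-dimensional summary c^word_{s,t} per turn.
slot_word_attn = nn.MultiheadAttention(d, heads, batch_first=True)
query = h_s.expand(n_turns, 1, d)                # the slot queries every turn
c_word, _ = slot_word_attn(query, h_t, h_t)      # (n_turns, 1, d)
c_word = c_word.squeeze(1)                       # (n_turns, d)

# Eqs. (4)-(5): unidirectional (causally masked) transformer over the turn sequence.
layer = nn.TransformerEncoderLayer(d_model=d, nhead=heads,
                                   dim_feedforward=4 * d, batch_first=True)
context_encoder = nn.TransformerEncoder(layer, num_layers=2)

pe = torch.zeros(n_turns, d)                     # placeholder for PE(t); sinusoidal in practice
m0 = (c_word + pe).unsqueeze(0)                  # (1, n_turns, d)
causal_mask = torch.triu(torch.full((n_turns, n_turns), float("-inf")), diagonal=1)
c_ctx = context_encoder(m0, mask=causal_mask)    # c^ctx_{s,<=t}, shape (1, n_turns, d)
print(c_word.shape, c_ctx.shape)
```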
Slot-Turn Attention. To retrieve turn-level relevant information from the contextual representation, we devise a slot-turn attention, which is a multi-head attention:
$$c^{turn}_{s,t} = \mathrm{MultiHead}(h_s, c^{ctx}_{s,\le t}, c^{ctx}_{s,\le t}) \quad (6)$$
Therefore, the model can access word-level and turn-level relevant information from the historical dialogues.

Global-Local Fusion Gate. To balance the information of the global context and the local utterances, we propose to dynamically control the proportions of contextual information and current-turn information, so that the model can not only benefit from relevant context but also keep a balance between global and local representations. Similar to Hochreiter and Schmidhuber (1997), we leverage a fusion gate mechanism, which computes a weight that decides how much global and local information should be combined according to $c^{word}_{s,t}$ and $c^{turn}_{s,t}$:
$$g_{s,t} = \sigma(W_g \odot [c^{word}_{s,t}; c^{turn}_{s,t}]), \qquad c^{gate}_{s,t} = g_{s,t} \otimes c^{word}_{s,t} + (1 - g_{s,t}) \otimes c^{turn}_{s,t} \quad (7)$$
where $W_g \in \mathbb{R}^{2d \times d}$ are parameters, $\sigma$ denotes the sigmoid activation function, and $\odot$ and $\otimes$ denote point-wise and element-wise multiplication, respectively. Finally, we use a linear projection with layer normalization and dropout to obtain the query result:
$$o_{s,t} = \mathrm{LayerNorm}(\mathrm{Linear}(\mathrm{Dropout}(c^{gate}_{s,t}))) \quad (8)$$
We follow Ren et al. (2018) and adopt the L2 norm to compute the distance. Therefore, the probability distribution of value $v_t$ and the training objective are defined as:
$$p(v_t \mid U_{\le t}, R_{\le t}, s) = \frac{\exp(-\| o_{s,t} - h^v_t \|_2)}{\sum_{v' \in V_s} \exp(-\| o_{s,t} - h^{v'}_t \|_2)}, \qquad \mathcal{L}_{dst} = \sum_{s \in S} \sum_{t=1}^{T} -\log p(\hat{v}_t \mid U_{\le t}, R_{\le t}, s) \quad (9)$$
where $V_s$ is the candidate value set of slot $s$ and $\hat{v}_t \in V_s$ is the ground-truth value of slot $s$.

2.3 State Transition Prediction
To better capture relevant context, we further introduce an auxiliary binary classification task to jointly train with DST: State Transition Prediction (STP), which predicts whether the value of a slot is updated compared with the previous turn. This module reads $c^{gate}_{s,t-1}$ and $c^{gate}_{s,t}$ as inputs, and the transition probability $p^{stp}_{s,t}$ is calculated as follows:
$$c^{stp}_{s,t} = \tanh(W_c \odot c^{gate}_{s,t}), \qquad p^{stp}_{s,t} = \sigma(W_p \odot [c^{stp}_{s,t}; c^{stp}_{s,t-1}]) \quad (10)$$
where $W_c \in \mathbb{R}^{d \times d}$ and $W_p \in \mathbb{R}^{2d}$ are parameters. Note that when $t = 1$, we simply concatenate $c^{stp}_{s,t}$ with a zero vector. For this task, we calculate the binary cross-entropy loss between the ground-truth transition labels $y^{stp}_{s,t}$ and the transition probability $p^{stp}_{s,t}$:
$$\mathcal{L}_{stp} = \sum_{s \in S} \sum_{t=1}^{T} -y^{stp}_{s,t} \cdot \log(p^{stp}_{s,t}) \quad (11)$$
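Continuing the sketch, the following shows one way Eqs. (6)–(11) could be wired together for a single slot at the current turn. The $W \odot [\cdot;\cdot]$ products of the paper are rendered as linear layers, and the shapes, toy inputs and labels are assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, heads, n_turns, n_values = 64, 4, 5, 7      # illustrative sizes

h_s = torch.randn(1, 1, d)                     # slot representation
c_word = torch.randn(1, n_turns, d)            # c^word_{s,t} from the slot-word attention
c_ctx = torch.randn(1, n_turns, d)             # c^ctx_{s,<=t} from the context encoder
h_v = torch.randn(n_values, d)                 # candidate value representations

# Eq. (6): slot-turn attention (here summarized for the current turn only).
slot_turn_attn = nn.MultiheadAttention(d, heads, batch_first=True)
c_turn, _ = slot_turn_attn(h_s, c_ctx, c_ctx)  # (1, 1, d)
c_word_t = c_word[:, -1:, :]                   # word-level summary at the current turn

# Eq. (7): global-local fusion gate.
W_g = nn.Linear(2 * d, d)
g = torch.sigmoid(W_g(torch.cat([c_word_t, c_turn], dim=-1)))
c_gate = g * c_word_t + (1 - g) * c_turn

# Eq. (8): dropout -> linear projection -> layer normalization.
proj = nn.Sequential(nn.Dropout(0.1), nn.Linear(d, d), nn.LayerNorm(d))
o = proj(c_gate).squeeze(0)                    # (1, d)

# Eq. (9): softmax over negative L2 distances to the candidate values.
log_p = F.log_softmax(-torch.cdist(o, h_v), dim=-1)
loss_dst = F.nll_loss(log_p, torch.tensor([3]))          # index of the gold value (toy)

# Eqs. (10)-(11): state transition prediction from consecutive gate outputs.
W_c, W_p = nn.Linear(d, d), nn.Linear(2 * d, 1)
c_stp_prev = torch.tanh(W_c(torch.randn(1, d)))          # c^stp_{s,t-1} (toy previous turn)
c_stp = torch.tanh(W_c(c_gate.squeeze(0)))
p_stp = torch.sigmoid(W_p(torch.cat([c_stp, c_stp_prev], dim=-1)))
loss_stp = F.binary_cross_entropy(p_stp, torch.ones_like(p_stp))  # label: value changed
print(loss_dst.item(), loss_stp.item())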
2.4 Adaptive Objective
Essentially, the slot imbalance problem can be considered a kind of class imbalance, because there is an imbalance among both different slots and different samples. Instead of treating all slots indiscriminately, it is important to balance the learning of different slots. Recently, Lin et al. (2017) proposed a soft-sampling method, Focal Loss, to re-weight the losses of different classes. Inspired by their work, we design a novel adaptive objective for DST, which evaluates the difficulty of each slot from its accuracy on the validation set and adaptively adjusts the weight of each slot during optimization. We define the accuracy of slot $s$ on the validation set as $acc^{val}_s$. Our adaptive objective is based on the following intuitions:
(1) If $acc^{val}_s \le acc^{val}_{s'}$, then slot $s$ is more difficult than slot $s'$. Suppose this slot-level difficulty is defined as $\alpha$; then
$$\alpha_s = \frac{1 - acc^{val}_s}{\sum_{s' \in S} (1 - acc^{val}_{s'})} \cdot |S| \quad (12)$$
(2) Suppose there are two samples $\{(U_t, R_t), (s, v_t)\}$ and $\{(U_{t'}, R_{t'}), (s', v_{t'})\}$. If the former's confidence is lower than the latter's, then sample $\{(U_t, R_t), (s, v_t)\}$ is more difficult than $\{(U_{t'}, R_{t'}), (s', v_{t'})\}$. Suppose this sample-level difficulty is defined as $\beta$; then
$$\beta(s, v_t) = (1 - p(s, v_t))^{\gamma} \quad (13)$$
where $p(s, v_t)$ is the confidence of sample $\{(U_t, R_t), (s, v_t)\}$ and $\gamma$ is a hyper-parameter. Thus, the adaptive objective is defined as follows:
$$\mathcal{L}_{adapt}(s, v_t) = -\alpha_s \, \beta(s, v_t) \log p(s, v_t) \quad (14)$$
Focal Loss assigns static learning weights to slots and does not change them during training. Compared with Focal Loss, our adaptive objective can fit the data better by dynamically evaluating the difficulties in an accuracy-sensitive manner and adaptively controlling the learning weights of different slots, which is demonstrated in our experiments. If the difficulty of slot $s$ is greater than the average difficulty of all slots, $\alpha_s$ increases and enlarges the loss of $s$. Similarly, the optimization of a sample $\{(U_t, R_t), (s, v_t)\}$ with a low confidence $p(s, v_t)$ is encouraged by a larger loss. When an epoch ends, the adaptive objective re-evaluates the difficulty of each slot and updates $\alpha_s$. Therefore, it can not only encourage the optimization of hard slots and samples but also balance the learning of all slots.
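The adaptive weighting of Eqs. (12)–(14) is simple to compute once per-slot validation accuracies are available; the following is a small illustrative sketch (the slot names, accuracies and confidence value are made up).

```python
import torch

def slot_weights(val_acc):
    """Eq. (12): alpha_s proportional to (1 - acc), normalized to average 1 over all slots."""
    slots = list(val_acc)
    diff = torch.tensor([1.0 - val_acc[s] for s in slots])
    alpha = diff / diff.sum() * len(slots)
    return dict(zip(slots, alpha.tolist()))

def adaptive_loss(p, alpha_s, gamma=2.0):
    """Eqs. (13)-(14): focal-style sample weight times slot weight on -log p of the gold value."""
    beta = (1.0 - p) ** gamma
    return -alpha_s * beta * torch.log(p)

# Toy usage: accuracies would be re-evaluated on the validation set after each epoch.
val_acc = {"hotel-type": 0.93, "restaurant-food": 0.98, "taxi-departure": 0.96}
alpha = slot_weights(val_acc)
p_gold = torch.tensor(0.7)     # model confidence on the ground-truth value
print(alpha)
print(adaptive_loss(p_gold, alpha["hotel-type"]).item())
```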
2.5 Optimization
In our model, we first jointly train the DST and STP tasks to convergence and then fine-tune the DST task with the adaptive objective. During joint training, we optimize the sum of the two loss functions:
$$\mathcal{L}_{joint} = \mathcal{L}_{dst} + \mathcal{L}_{stp} \quad (15)$$
In the fine-tuning phase, we adopt the adaptive objective to fine-tune the DST task:
$$\mathcal{L}_{finetune} = \sum_{s \in S} \sum_{t=1}^{T} \mathcal{L}_{adapt}(s, \hat{v}_t) \quad (16)$$

3 Experiments Setup

3.1 Datasets & Metrics

Table 2: The dataset statistics of MultiWOZ 2.0 & 2.1.
Domain      Slots                                                                   Train   Valid   Test
Hotel       price, type, parking, stay, day, people, area, stars, internet, name    3381    416     394
Train       destination, departure, day, arrive by, leave at, people                3103    484     494
Attraction  area, name, type                                                        2717    401     395
Restaurant  food, price, area, name, time, day, people                              3813    438     437
Taxi        destination, departure, arrive by, leave by                             1654    207     195

We evaluate our model on MultiWOZ 2.0 (Budzianowski et al., 2018) and MultiWOZ 2.1 (Eric et al., 2019), two of the largest public task-oriented dialogue datasets, which include about 10,000 dialogues with 7 domains and 35 domain-slot pairs. MultiWOZ 2.1 shares the same dialogues as MultiWOZ 2.0 but fixes previous annotation errors. The statistics are shown in Table 2. Following Wu et al. (2019), we use only the 5 domains {restaurant, hotel, train, attraction, taxi}, excluding hospital and police since these two domains never occur in the test set. We preprocess the datasets following Lee et al. (2019)². We use joint accuracy and slot accuracy as our evaluation metrics. Joint accuracy is the accuracy of the dialogue state at each turn, where a dialogue state is counted as correct only if the values of all slots are correctly predicted. Slot accuracy only considers individual slot-level accuracy.

3.2 Baseline Models
We compare our results with the following competitive baselines: DSTreader models DST as a machine reading comprehension task and extracts spans from the dialogue history (Gao et al., 2019b). GLAD-RCFS uses a heuristic rule to extract relevant turns and lets slot-value pairs query relevant context from them (Sharma et al., 2019). HyST employs a hierarchical encoder and takes a hybrid approach combining the predefined-ontology and open-vocabulary settings (Goel et al., 2019). TRADE encodes the whole dialogue context and decodes the value for every slot using a copy-augmented decoder (Wu et al., 2019). DST-QA models DST as a question answering problem and uses a dynamically-evolving knowledge graph to learn relationships between slot pairs (Zhou and Small, 2019). SOM-DST considers the dialogue state as an explicit fixed-size memory and proposes a selective overwriting mechanism (Kim et al., 2019). SUMBT exploits BERT as the encoder of the utterances, slots and values, and scores every candidate slot-value pair in a non-parametric manner using a distance measure (Lee et al., 2019). DST-picklist performs matching between candidate values and slot-context encodings, treating all slots as picklist-based slots (Zhang et al., 2019). GLAD-RCFS, HyST, SUMBT and DST-picklist are predefined-ontology models, as is our model; DSTreader, TRADE, DST-QA and SOM-DST are open-vocabulary models.
²https://github.com/SKTBrain/SUMBT

Table 3: Joint accuracy & slot accuracy on the test sets of MultiWOZ 2.0 and 2.1. The Ontology column indicates whether a model is based on a predefined ontology. † marks the updated results on SUMBT's GitHub², ‡ marks our reproduction using the source code of SUMBT², and ⋆ marks results borrowed from (Eric et al., 2019).
Model                              Ontology   2.0 Joint (%)   2.0 Slot (%)   2.1 Joint (%)   2.1 Slot (%)
DSTreader (Gao et al., 2019b)      ×          39.41           -              36.40⋆          -
GLAD-RCFS (Sharma et al., 2019)    ✓          46.31           -              -               -
HyST (Goel et al., 2019)           ✓          42.33           -              38.10⋆          -
TRADE (Wu et al., 2019)            ×          48.60           96.92          45.60⋆          -
DST-QA (Zhou and Small, 2019)      ×          51.44           97.24          51.17           97.21
SOM-DST (Kim et al., 2019)         ×          51.38           -              52.57           -
SUMBT (Lee et al., 2019)           ✓          48.81†          97.33†         52.75‡          97.56‡
DST-picklist (Zhang et al., 2019)  ✓          -               -              53.30           -
Our Model                          ✓          52.68           97.69          58.55           98.14

3.3 Settings
We employ the pre-trained BERT model with 12 layers, 768 hidden units and 12 self-attention heads³. For the multi-head attention, we set the number of heads and the hidden size to 4 and 784, respectively. For the context encoder, we set the number of transformer layers to 6. We set the maximum sequence length of all inputs to 64 and the batch size to 32. In all training, we use the Adam optimizer (Kingma and Ba, 2015) and set the warmup proportion to 0.1. Specifically, in the joint training phase we set the peak learning rate to 1e-4; in the fine-tuning phase we set $\gamma$ to 2 and the peak learning rate to 1e-5. Training is stopped early when the validation loss has not improved for 15 consecutive epochs. For all experiments, we report the mean joint accuracy over multiple different random seeds to reduce statistical errors.
³It is published as the bert-base-uncased model in https://github.com/huggingface/pytorch-transformers
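As a reference point for the metrics defined in §3.1, below is a small sketch of joint accuracy and slot accuracy. It illustrates the standard definitions in a simplified form (scoring only the slots present in the gold state), not the exact evaluation script used in the experiments.

```python
def joint_and_slot_accuracy(predictions, golds):
    """predictions/golds: lists of per-turn dialogue states, each a dict slot -> value."""
    joint_hits, slot_hits, slot_total = 0, 0, 0
    for pred, gold in zip(predictions, golds):
        joint_hits += int(pred == gold)          # every slot must match for a joint hit
        for slot, value in gold.items():
            slot_total += 1
            slot_hits += int(pred.get(slot) == value)
    return joint_hits / len(golds), slot_hits / slot_total

pred = [{"hotel-area": "centre", "hotel-stars": "4"}]
gold = [{"hotel-area": "centre", "hotel-stars": "3"}]
print(joint_and_slot_accuracy(pred, gold))       # (0.0, 0.5)
```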
4 Experiment Results

4.1 Main Results
Table 3 shows the joint accuracy of our model and the other baselines on the test sets of MultiWOZ 2.0 and 2.1. Our model beats all baselines, whether they are based on a predefined ontology or an open vocabulary, and achieves 52.68% and 58.55% joint accuracy, with considerable improvements (1.24% and 5.98%) over the previous best results on MultiWOZ 2.0 and 2.1, respectively. Also, our model achieves 97.69% and 98.14% slot accuracy, with 0.36% and 0.58% improvements over the previous best results on MultiWOZ 2.0 and 2.1, respectively. Similar to Kim et al. (2019), we find that our model achieves much larger improvements on MultiWOZ 2.1 than on MultiWOZ 2.0. This is probably because MultiWOZ 2.1 fixes many annotation errors in MultiWOZ 2.0, so our model can benefit more from more accurate relevant context.

Table 4: The ablation study of the state transition prediction and the adaptive objective on the MultiWOZ 2.1 test set with joint accuracy (%). † means removing the above two modules and keeping only CHAN; ‡ means fine-tuning with focal loss instead.
Model                                      MultiWOZ 2.1
Our Model                                  58.55
 - state transition prediction             57.86 (-0.69)
 - adaptive objective fine-tuning          57.45 (-1.10)
 - above two (only CHAN)†                  57.00 (-1.55)
Our Model (FL ($\alpha$=1, $\gamma$=2))‡   58.10 (-0.45)

4.2 Ablation Study
As shown in Table 4, we evaluate the effectiveness of the proposed state transition prediction and adaptive objective on the MultiWOZ 2.1 test set. The results show that both the state transition prediction task and the adaptive objective boost performance. Removing the state transition prediction task reduces joint accuracy by 0.69%, and joint accuracy decreases by 1.10% without the adaptive-objective fine-tuning. Moreover, when we remove the state transition prediction task and do not fine-tune our model with the adaptive objective (only CHAN remains), joint accuracy decreases by 1.55%. Also, to explore the importance of adjusting $\alpha_s$ adaptively, we replace the adaptive objective with the original focal loss ($\alpha = 1$, $\gamma = 2$), which leads to a 0.45% drop. To further verify the effectiveness of each module of the proposed CHAN, we conduct ablation experiments on the MultiWOZ 2.1 test set, as shown in Table 5.

[Figure 2 shows an example dialogue from the MultiWOZ 2.1 test set ("i am looking for a cheap restaurant in the center of the city", ..., "there is a cheap chinese restaurant called the dojo noodle bar located in the centre of town", ...) together with the turn-level and word-level attention of the slot "restaurant-name" over turns 1-5.] Figure 2: The turn-level and word-level attention visualization of our model on an example from the MultiWOZ 2.1 test set, predicting the value of the slot "restaurant-name" at the 5th turn. The columns "0,1,2,3" index the heads of the multi-head attention. Although there is no slot-related information at the 5th turn, our model still makes the correct prediction by attending to the historical relevant words "dojo noodle bar" and the relevant turns {3, 4}, which are highlighted in red. Best viewed in color.
We observe a slight joint accuracy drop of 0.24% after removing the global-local fusion gate, which shows the effectiveness of fusing the global context and the local utterances. Moreover, removing the slot-turn attention and the context encoder leads to decreases of 0.15% and 1.72%, respectively, which demonstrates that the turn-level relevant information and the contextual representations of word-level relevant information are effective for improving performance. Furthermore, after we remove the aforementioned three modules and simply sum the word-level relevant information of turns {1, ..., t} as the output, joint accuracy drops by 6.72%, which is much larger than the sum of the above three reductions. This demonstrates that effectively modeling interactions with the word-level relevant information of the dialogue history is crucial for DST.

Table 5: The ablation study of the CHAN on the MultiWOZ 2.1 test set with joint accuracy (%). † means removing the above three modules and summing the word-level relevant information of turns {1, ..., t} as the output.
Model                          MultiWOZ 2.1
CHAN                           57.00
 - global-local fusion gate    56.76 (-0.24)
 - slot-turn attention         56.85 (-0.15)
 - context encoder             55.28 (-1.72)
 - above three†                50.28 (-6.72)

4.3 Attention Visualization
Figure 2 shows the visualization of the turn-level and word-level attention of the "restaurant-name" slot on a prediction example of our model at turn 5. The turn-level attention visualization indicates that our model attends to turns {3, 4}, which are semantically related to the given slot "restaurant-name", while paying almost no attention to turns {1, 2}. From the word-level attention visualization, we can easily see that the "restaurant-name" slot attends to "dojo noodle bar" with the highest weight in both turn 3 and turn 4. Although there is no slot-related information at turn 5, our model still makes the correct decision by exploiting relevant context from the historical dialogue.

4.4 Effects of Adaptive Obj. on Acc. per Slot
As Figure 3 shows, we plot the accuracy change of each slot on the MultiWOZ 2.1 test set after fine-tuning our model with the adaptive objective. We sort all slots in ascending order of their frequency (the detailed accuracy results are in the Appendix). [Figure 3: The accuracy changes (%) of each slot on the MultiWOZ 2.1 test set after fine-tuning with the adaptive objective; slots are sorted in ascending order of frequency, from taxi-arrive-by (least frequent) to restaurant-food (most frequent).] Thus, the slots on the left side are relatively more difficult than the slots on the right side. After fine-tuning with the adaptive objective, most slots on the left side achieve significant improvements, which shows that the adaptive objective can encourage the learning of hard slots. Although the adaptive objective tends to decrease the weights of the slots on the right side, they also benefit from the fine-tuning.
We think this is because encouraging the optimization of hard slots enhances our model's ability to track more complicated dialogue states. This shows that our adaptive objective can not only improve the performance of relatively hard slots but also boost the performance of relatively easy slots.

4.5 Qualitative Analysis
To explore the advantages of our model over baseline models, we conduct a human evaluation on a subset of the MultiWOZ 2.1 test set where our model makes correct predictions while SUMBT (a previous strong baseline) fails. We predefine three types of improvements: historical information inference improvement, meaning that inferring historical information is necessary for a correct decision; current information inference improvement, meaning that inferring current information is enough for a correct decision; and other improvements. As shown in Table 6, 64.49% of the improvements come from historical information inference, which demonstrates that our model can better exploit relevant context from the dialogue history.

Table 6: Qualitative analysis of the improvements of our model over the previous strong baseline SUMBT, evaluated by humans on a subset of the MultiWOZ 2.1 test set where our model makes correct predictions while SUMBT fails.
Improvement Type                                 Percentage
Historical Information Inference Improvement     64.49%
Current Information Inference Improvement        34.86%
Others                                            0.65%

5 Related Work
Traditional statistical dialogue state tracking models combine semantics extracted by spoken language understanding modules to predict the current dialogue state (Williams and Young, 2007; Thomson and Young, 2010; Wang and Lemon, 2013; Williams, 2014) or jointly learn speech understanding (Henderson et al., 2014b; Zilka and Jurcicek, 2015; Wen et al., 2017). One drawback is that they rely on hand-crafted features and complex domain-specific lexicons besides the ontology, and they are hard to extend and scale to new domains. Recent neural network models have been proposed for further improvements (Mrkšić et al., 2015; Hori et al., 2016; Mrkšić et al., 2017; Lei et al., 2018; Xu and Hu, 2018; Zhong et al., 2018; Nouri and Hosseini-Asl, 2018; Wu et al., 2019; Ren et al., 2019; Balaraman and Magnini, 2019). Ren et al. (2018) and Lee et al. (2019) use an RNN to encode the slot-related information of each turn, where slots cannot attend directly to the relevant information of past turns. Sharma et al. (2019) employ a heuristic rule to extract a partial dialogue history and then integrate the historical information into the prediction in a coarse manner. Goel et al. (2019) encode the dialogue history into a hidden state and then simply combine it with the slot to make decisions. These models are deficient in fully exploiting the relevant context in the dialogue history. Gao et al. (2019b) introduce a slot carryover model to decide whether the values from the previous turn should be reused, and Kim et al. (2019) introduce a state operation predictor to decide the operation applied to the previous state. Different from them, we consider state transition prediction an additional enhancement, while they integrate it into their DST pipelines. Besides, Zhong et al. (2018) only employ local modules to model the slot-specific representations, which neglects the slot imbalance problem. The general backbone of our model is a hierarchical attention network that can effectively aggregate query-related information at multiple levels (Yang et al., 2016; Ying et al., 2018; Wang et al., 2018; Xing et al., 2018; Aujogue and Aussem, 2019; Naik et al., 2018; Liu and Chen, 2019).
6 Conclusion
We introduce an effective model that consists of a contextual hierarchical attention network, which fully exploits relevant context from the dialogue history, and an adaptive objective, which alleviates the slot imbalance problem in dialogue state tracking. Experimental results show that our model achieves state-of-the-art performance of 52.68% and 58.55% joint accuracy, with considerable improvements (+1.24% and +5.98%) over the previous best results on the MultiWOZ 2.0 and MultiWOZ 2.1 datasets, respectively. Although our model is based on a predefined ontology, it is universal and scalable to unseen domains, slots and values. The main contributions of our model, CHAN and the adaptive objective, can also be applied to open-vocabulary models. We will explore this in future work.

Acknowledgments
We thank the anonymous reviewers for their insightful comments. This work was supported by the National Key R&D Program of China (No. 2017YFE0192900).

References
Jean-Baptiste Aujogue and Alex Aussem. 2019. Hierarchical recurrent attention networks for context-aware education chatbots. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.
Vevake Balaraman and Bernardo Magnini. 2019. Scalable neural dialogue state tracking. arXiv preprint arXiv:1910.09942.
Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Íñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. Multiwoz - a large-scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026.
Guan-Lin Chao and Ian Lane. 2019. Bert-dst: Scalable end-to-end dialogue state tracking with bidirectional encoder representations from transformer. Proc. Interspeech 2019, pages 1468–1472.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186.
Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyang Gao, and Dilek Hakkani-Tur. 2019. Multiwoz 2.1: Multi-domain dialogue state corrections and state tracking baselines. arXiv preprint arXiv:1907.01669.
Jianfeng Gao, Michel Galley, Lihong Li, et al. 2019a. Neural approaches to conversational AI. Foundations and Trends® in Information Retrieval, 13(2-3):127–298.
Shuyang Gao, Abhishek Sethi, Sanchit Agarwal, Tagyoung Chung, Dilek Hakkani-Tur, and Amazon Alexa AI. 2019b. Dialog state tracking: A neural reading comprehension approach. In 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue, page 264.
Rahul Goel, Shachi Paul, and Dilek Hakkani-Tür. 2019. Hyst: A hybrid approach for flexible and accurate dialogue state tracking. Proc. Interspeech 2019.
Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014a. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263–272.
Matthew Henderson, Blaise Thomson, and Steve Young. 2014b. Word-based dialog state tracking with recurrent neural networks.
In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292–299. Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Takaaki Hori, Hai Wang, Chiori Hori, Shinji Watanabe, Bret Harsham, Jonathan Le Roux, John R Hershey, Yusuke Koji, Yi Jing, Zhaocheng Zhu, et al. 2016. Dialog state tracking with attention-based sequenceto-sequence learning. In 2016 IEEE Spoken Language Technology Workshop (SLT), pages 552–558. IEEE. Sungdong Kim, Sohee Yang, Gyuwan Kim, and SangWoo Lee. 2019. Efficient dialogue state tracking by selectively overwriting memory. arXiv preprint arXiv:1911.03906. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Hung Le, Doyen Sahoo, Chenghao Liu, Nancy F. Chen, and Steven C.H. Hoi. 2020a. End-to-end multidomain task-oriented dialogue systems with multilevel neural belief tracker. In OpenReview. RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.563. 6331 Hung Le, Richard Socher, and Steven C.H. Hoi. 2020b. Non-autoregressive dialog state tracking. In International Conference on Learning Representations. Hwaran Lee, Jinsik Lee, and Tae-Yoon Kim. 2019. Sumbt: Slot-utterance matching for universal and scalable belief tracking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5478–5483. Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1437–1447. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll´ar. 2017. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980– 2988. Zhengyuan Liu and Nancy Chen. 2019. Reading turn by turn: Hierarchical attention architecture for spoken dialogue comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5460–5466. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Blaise Thomson, Milica Gasic, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2015. Multidomain dialog state tracking using recurrent neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 794–799. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777–1788. Chetan Naik, Arpit Gupta, Hancheng Ge, Mathias Lambert, and Ruhi Sarikaya. 2018. Contextual slot carryover for disparate schemas. Proc. Interspeech 2018, pages 596–600. Elnaz Nouri and Ehsan Hosseini-Asl. 2018. Toward scalable neural dialogue state tracking model. In Proceedings of NeurIPS 2018, 2nd Conversational AI workshop. Liliang Ren, Jianmo Ni, and Julian McAuley. 2019. Scalable and accurate dialogue state tracking via hierarchical sequence generation. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1876–1885. Liliang Ren, Kaige Xie, Lu Chen, and Kai Yu. 2018. Towards universal dialogue state tracking. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2780– 2786. Sanuj Sharma, Prafulla Kumar Choubey, and Ruihong Huang. 2019. Improving dialogue state tracking by discerning the relevant context. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 576–581. Blaise Thomson and Steve Young. 2010. Bayesian update of dialogue state: A pomdp framework for spoken dialogue systems. Computer Speech & Language, 24(4):562–588. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Wei Wang, Ming Yan, and Chen Wu. 2018. Multigranularity hierarchical attention fusion networks for reading comprehension and question answering. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1705–1714. Zhuoran Wang and Oliver Lemon. 2013. A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In Proceedings of the SIGDIAL 2013 Conference, pages 423–432. Tsung-Hsien Wen, David Vandyke, Nikola Mrkˇsi´c, Milica Gasic, Lina M Rojas Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A networkbased end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 438–449. Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 404–413. Jason D Williams. 2014. Web-style ranking and slu combination for dialog state tracking. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 282–291. Jason D Williams and Steve Young. 2007. Partially observable markov decision processes for spoken dialog systems. Computer Speech & Language, 21(2):393–422. Chien-Sheng Wu, Andrea Madotto, Ehsan HosseiniAsl, Caiming Xiong, Richard Socher, and Pascale RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.563. 6332 Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 808–819. Chen Xing, Yu Wu, Wei Wu, Yalou Huang, and Ming Zhou. 2018. Hierarchical recurrent attention network for response generation. In Thirty-Second AAAI Conference on Artificial Intelligence. Puyang Xu and Qi Hu. 2018. An end-to-end approach for handling unknown slot values in dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1448–1457. Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. 
Hierarchical attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies, pages 1480–1489. Haochao Ying, Fuzhen Zhuang, Fuzheng Zhang, Yanchi Liu, Guandong Xu, Xing Xie, Hui Xiong, and Jian Wu. 2018. Sequential recommender system based on hierarchical attention networks. In the 27th International Joint Conference on Artificial Intelligence. Steve Young. 2002. Talking to machines (statistically speaking). In Seventh International Conference on Spoken Language Processing. Steve Young, Milica Gaˇsi´c, Blaise Thomson, and Jason D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179. Jian-Guo Zhang, Kazuma Hashimoto, Chien-Sheng Wu, Yao Wan, Philip S Yu, Richard Socher, and Caiming Xiong. 2019. Find or classify? dual strategy for slot-value predictions on multi-domain dialog state tracking. arXiv preprint arXiv:1910.03544. Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1458– 1467. Li Zhou and Kevin Small. 2019. Multi-domain dialogue state tracking as dynamic knowledge graph enhanced question answering. arXiv preprint arXiv:1911.06192. Lukas Zilka and Filip Jurcicek. 2015. Incremental lstmbased dialog state tracker. In 2015 Ieee Workshop on Automatic Speech Recognition and Understanding (Asru), pages 757–762. IEEE. RETRACTED This paper was retracted. For more information, see https://aclanthology.org/2020.acl-main.563. 6333 A Slot Imbalance Figure 4 shows the relationships between frequency and accuracy of slots (left) and slot-value pairs (right). Because the frequency will be the same for all slots if we consider “none” as well, we calculate accuracy with “none” value excluded for slots. Overall, the more the frequency, the higher the accuracy. It demonstrates that the slot imbalance problem results in different learning difficulties for different slots. Moreover, the slot imbalance problem makes some slots hard to learn and hence hurts the accuracy, which limits the overall performance. Figure 4: The relationships between frequency and accuracy of slots (left) and slot-value pairs (right). Because the frequency will be the same for all slots if we consider “none” as well, we calculate accuracy with “none” value excluded for slots. B Acc. 
per Slot on the MultiWOZ 2.1 Test Set

Table 7: The detailed accuracy (%) per slot before and after fine-tuning our model with the adaptive objective on the MultiWOZ 2.1 test set, sorted in ascending order of frequency. ∆ denotes the change in accuracy after fine-tuning.
Domain-Slot               Frequency   w/o adaptive objective   Our Model   ∆
taxi-arrive by            1794        99.13                    99.25       0.13
taxi-leave at             2165        99.14                    99.27       0.13
taxi-departure            4037        98.12                    98.37       0.25
taxi-destination          4108        98.10                    98.26       0.17
attraction-name           5843        94.16                    94.18       0.02
train-book people         6178        97.72                    97.76       0.05
restaurant-name           7293        93.67                    93.78       0.11
train-arrive by           7488        97.97                    97.99       0.02
train-leave at            7563        96.05                    96.22       0.16
hotel-internet            8012        97.26                    97.16      -0.09
hotel-parking             8179        97.28                    97.14      -0.13
hotel-name                8621        95.41                    95.52       0.11
hotel-book stay           8715        99.44                    99.46       0.01
hotel-book people         8734        99.35                    99.28      -0.07
hotel-book day            8745        99.28                    99.28       0
restaurant-book time      8958        99.15                    99.30       0.16
restaurant-book day       9021        99.31                    99.35       0.04
restaurant-book people    9026        99.35                    99.35       0
hotel-stars               9330        98.31                    98.41       0.10
attraction-area           9766        98.03                    98.03       0
hotel-price range         9793        98.69                    98.60      -0.09
hotel-type                10110       93.62                    94.02       0.41
attraction-type           10525       97.26                    97.39       0.12
hotel-area                10885       97.53                    97.67       0.15
restaurant-price range    14410       97.66                    97.84       0.18
restaurant-area           14741       97.68                    97.86       0.19
train-day                 15384       99.43                    99.42      -0.01
train-departure           15672       98.42                    98.48       0.06
train-destination         15951       98.63                    98.70       0.07
restaurant-food           16095       97.54                    97.61       0.06
2020
563
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6334–6343 July 5 - 10, 2020. ©2020 Association for Computational Linguistics 6334
Data Manipulation: Towards Effective Instance Learning for Neural Dialogue Generation via Learning to Augment and Reweight
Hengyi Cai†,§∗, Hongshen Chen‡, Yonghao Song†, Cheng Zhang†, Xiaofang Zhao† and Dawei Yin††
†Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China §University of Chinese Academy of Sciences, Beijing, China ‡Data Science Lab, JD.com, China ††Baidu Inc., China
[email protected], [email protected], {songyonghao, zhangcheng, zhaoxf}@ict.ac.cn, [email protected]
∗Work done at Data Science Lab, JD.com.

Abstract
Current state-of-the-art neural dialogue models learn from human conversations following the data-driven paradigm. As such, a reliable training corpus is the crux of building a robust and well-behaved dialogue model. However, due to the open-ended nature of human conversations, the quality of user-generated training data varies greatly, and effective training samples are typically insufficient while noisy samples frequently appear. This impedes the learning of those data-driven neural dialogue models. Therefore, effective dialogue learning requires not only more reliable learning samples, but also fewer noisy samples. In this paper, we propose a data manipulation framework to proactively reshape the data distribution towards reliable samples by augmenting and highlighting effective learning samples as well as reducing the effect of inefficient samples simultaneously. In particular, the data manipulation model selectively augments the training samples and assigns an importance weight to each instance to reform the training data. Note that the proposed data manipulation framework is fully data-driven and learnable. It not only manipulates training samples to optimize the dialogue generation model, but also learns to increase its manipulation skills through gradient descent with validation samples. Extensive experiments show that our framework can improve the dialogue generation performance with respect to various automatic evaluation metrics and human judgments.

1 Introduction
Open-domain dialogue generation, due to its potential applications, is becoming ubiquitous in the community of natural language processing. Current end-to-end neural dialogue generation models (Li et al., 2016; Serban et al., 2017; Zhao et al., 2017) are primarily built following the data-driven paradigm, that is, these models mimic human conversations by training on large-scale query-response pairs. As such, a reliable training corpus that exhibits high-quality conversations is the crux of building a robust and well-behaved dialogue model.

[Figure 1: Data manipulation helps the dialogue model training by augmenting and highlighting effective learning samples as well as reducing the weights of inefficient samples.]

Unfortunately, owing to the subjectivity and open-ended nature of human conversations, the quality of the collected human-generated dialogues varies greatly (Shang et al., 2018), which hampers the effectiveness of data-driven dialogue models: 1) Effective conversation samples are quite insufficient. To glean some insights into the data quality of dialogue corpora, we use query-relatedness to take a glimpse at the data quality.
In a dialogue corpus, some conversations are quite coherent, where the queries and responses are well-correlated, while others are not. Query-relatedness measures the semantic similarity between a query and its corresponding response in the embedding space and ranges from 0 to 1. When reviewing DailyDialog (Li et al., 2017), we find that only 12% of the conversation samples have relatively high query-relatedness scores (> 0.6). Without adequate reliable training samples, the neural dialogue model is prone to converge to a sub-optimal point. 2) Meanwhile, noisy and even meaningless conversation samples frequently appear. As Li et al. (2016) reported, "I don't know" appears in over 113K sentences in the training corpus OpenSubtitles (Lison and Tiedemann, 2016). Such noisy conversation data prevails in neural dialogue model training and vitally impedes model learning. Therefore, effective dialogue learning requires not only more reliable learning samples, but also fewer noisy samples.

In this work, as illustrated in Figure 1, we propose a novel learnable data manipulation framework to proactively reshape the data distribution towards reliable samples by augmenting and highlighting effective learning samples as well as reducing the weights of inefficient samples simultaneously. Specifically, to generate more effective data samples, the data manipulation model selectively augments the training samples at both the word level and the sentence level, using masked language models such as BERT (Devlin et al., 2019) and the back-translation technique (Sennrich et al., 2016). To reduce the weights of inefficient samples among the original training samples and the augmented samples, the data manipulation model assigns an importance weight to each sample to adapt the sample's effect on dialogue model training. It gives higher importance weights to critical learning samples and lower weights to inefficient samples. Furthermore, different from most previous data augmentation or data weighting studies (Li et al., 2019; Shang et al., 2018; Csáky et al., 2019), which are unaware of the target model's state during augmentation or weighting, our data manipulation framework not only manipulates training samples to optimize the dialogue generation model, but also learns to increase its manipulation skills through gradient descent with validation samples.

We apply the proposed data manipulation framework to several state-of-the-art generation models on two real-life open-domain conversation datasets and compare with recent data manipulation approaches in terms of 13 automatic evaluation metrics and human judgment. Experiment results show that our data manipulation framework outperforms the baseline models on most of the metrics on both datasets.

2 Data Manipulation for Neural Dialogue Generation
[Figure 2: Overview of the proposed automated data manipulation framework for neural dialogue generation. At training step t, the data manipulation model augments and weights the training samples for the dialogue model learning.]
The proposed data manipulation framework tackles the problem of uneven-quality data by inducing the model to learn from more effective dialogue samples and reducing the effects of inefficient samples simultaneously.
In particular, as illustrated in Figure 2, it manipulates and reshapes the data distribution for neural dialogue model learning in three main stages: first, each batch of training samples is selectively augmented to generate more variant samples; then, all the samples, including the original samples and the augmented samples, are assigned instance weights indicating their importance with respect to the current learning status; finally, the weighted samples are fed into the neural dialogue model to induce the model to learn from more effective training instances. Note that although we describe the framework in three components for ease of understanding, the whole framework can in fact be trained in an end-to-end manner. As a result, the data manipulation network is capable of not only manipulating training samples to optimize the dialogue generation model, but also learning to increase its manipulation skills through gradient descent with validation samples. We first introduce the augmentation and weighting strategies for data manipulation in §2.1 and §2.2, and then describe how the neural dialogue generation model learns from the manipulated samples in §2.3. Parameter estimation for the data manipulation model is elaborated in §2.4.

2.1 Dialogue Augmentation
To induce the neural dialogue generation model to learn from more effective samples, we develop a gated data augmentation mechanism for the manipulation framework to selectively augment the learning samples. Specifically, as shown in Figure 3, given a training sample, the manipulation framework first specifies whether to augment it or not through an instance filter, which can be implemented using a sigmoid gating function.
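The instance filter is only described at this level of detail; one plausible reading — and it is only a guess, with hypothetical names and dimensions — is a sigmoid gate over a sentence-level representation of the training pair that decides whether the sample is worth augmenting:

```python
import torch
import torch.nn as nn

class InstanceFilter(nn.Module):
    """Sigmoid gate deciding whether a training pair should be augmented (an assumed design)."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, sample_repr, threshold=0.5):
        gate = torch.sigmoid(self.score(sample_repr)).squeeze(-1)  # probability of augmenting
        return gate, gate > threshold

f = InstanceFilter(dim=64)
reprs = torch.randn(8, 64)          # stand-in encodings of a batch of (query, response) pairs
gate, augment_mask = f(reprs)
print(augment_mask.tolist())
```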
[Figure 3: Illustration of the data manipulation model. During training, it takes the original batch samples as input and generates the augmented data samples as well as the importance weights for dialogue model training. Panel (a) shows word-level augmentation (e.g., for "why are you deciding to go abroad", the masked language model proposes substitutions such as choosing/wanting/preferring and overseas/away/outside); panel (b) shows sentence-level augmentation (e.g., "why are you choosing to leave for a foreign country").]

Then, two levels of data augmentation are introduced, word-level contextual augmentation and sentence-level data augmentation, to augment the chosen sample accordingly.

2.1.1 Word-level Contextual Augmentation
As the name suggests, word-level augmentation enriches the training samples by substituting words in the original sample (Figure 3 (a)). Here, we employ a masked language model, BERT (Devlin et al., 2019), to implement word-level augmentation. Given an original sentence, the language model first randomly masks out a few words. BERT then takes in the masked sentence and predicts new words for the corresponding masked positions. A fixed pre-trained BERT may not generalize well for our data manipulation framework, because BERT is unaware of the dialogue learning status. To mitigate such defects, we further fine-tune BERT through backpropagation (more details in §2.4). In particular, BERT is made differentiable by utilizing a Gumbel-Softmax approximation (Jang et al., 2017) when predicting substitution words.

2.1.2 Sentence-level Data Augmentation
Word-level data augmentation is quite straightforward; however, such rewriting is limited to only a few words. In human dialogues, there exist various synonymous conversations with different sentence structures. To further diversify the expressions in a conversation, we introduce sentence-level data augmentation through back-translation, as in Edunov et al. (2018) and Yu et al. (2018), which trains two translation models: one from the source language to the target language, and a backward model from the target back to the source, as shown in Figure 3 (b). By transforming the expression styles across different languages, the augmented training samples are expected to convey similar information with different expressions. Similar to the fine-tuning strategy in word-level data augmentation, we also fine-tune the sentence-level data augmentation components to encourage the model to generate more effective samples for dialogue training. The gradients are backpropagated into the translation-based augmentation model, where a differentiable Gumbel-Softmax is utilized when predicting sentences with the translation model.
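As an illustration of the two augmentation routes in §2.1, the snippet below uses off-the-shelf Hugging Face pipelines for masked-word substitution and round-trip (back-)translation. The checkpoints are example choices rather than the ones used in the paper, and the differentiable Gumbel-Softmax fine-tuning described above is omitted.

```python
from transformers import pipeline

# Word-level augmentation: mask a word and let a masked LM propose substitutions.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
masked = "why are you [MASK] to go abroad"
print([c["token_str"] for c in fill_mask(masked)[:3]])   # candidate substitutions

# Sentence-level augmentation: round-trip translation (English -> German -> English).
en_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
de_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")
sentence = "why are you deciding to go abroad"
pivot = en_de(sentence)[0]["translation_text"]
paraphrase = de_en(pivot)[0]["translation_text"]
print(paraphrase)
```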
2.2 Data Weighting
Given the original training samples and the augmented samples, to deal with the problem of noisy instances, the data manipulation model assigns an importance weight to each training sample according to the learning status. In particular, the sample importance weights are approximated through a softmax function over the scores of these instances. A multilayer perceptron is employed to compute example scores, taking distributional representations of these instances as input. Each sample is converted into its corresponding distributional representation through a transformer-based encoder.

2.3 Dialogue Generation with Data Manipulation
Conventionally, a neural dialogue generation model is optimized with a vanilla negative log-likelihood loss over the training data D of size N, L_vanilla = \sum_{j=1}^{N} -\log p(y_j | x_j), where each sample is treated equally. In our framework, we assign each sample an importance weight and augment the original training set D = {(x_j, y_j)}_{j=1}^{N} to D' = {(x_j, y_j)}_{j=1}^{N'} according to the learning status. To perform weighted optimization with the augmented training set D', we utilize a weighted negative log-likelihood loss function:
L_dm = \sum_{j=1}^{N'} -w_j \log p(y_j | x_j),   (1)
where w_j is the instance weight produced by the data manipulation network.

2.4 Parameter Estimation for Data Manipulation
The data manipulation network not only manipulates training samples to optimize the dialogue learning process, but also learns to improve its manipulation skills through gradient descent with validation samples. We formulate this joint learning process following a novel policy learning paradigm (Hu et al., 2019; Tan et al., 2019), where the manipulation framework is formulated as a learnable data-dependent reward function R_φ(d = {x, y} | D), the dialogue model p_θ(y|x) is treated as a policy, the input x as the "state", and the output y as the "action". The reward function R_φ(d|D) is defined as:
R_φ(d|D) = w_i, if d is an augmented sample of d*_i or d = d*_i with d*_i ∈ D; R_φ(d|D) = -∞, otherwise,   (2)
where φ denotes the parameters of the data manipulation network and w_i ∈ R is the importance weight associated with the i-th data sample. In this formulation, a sample d receives a real-valued reward when d is an augmented sample or when d matches an instance in the original training set.

As depicted in Algorithm 1, the parameter θ of the neural dialogue model and the parameter φ of the data manipulation network are alternately optimized. Jointly optimizing the dialogue model and the manipulation network can be regarded as reward learning, where the policy p_θ(y|x) receives relatively higher rewards for effective samples and lower rewards for inefficient samples.

Algorithm 1: Joint Learning of Dialogue Model and Data Manipulation Network
Input: the dialogue model θ, the data manipulation network φ, training set D and validation set D_v
1: Initialize the dialogue model parameter θ and the data manipulation model parameter φ
2: repeat
3:   Optimize θ on D enriched with data manipulation.
4:   Optimize φ by maximizing the data log-likelihood on D_v.
5: until convergence
Output: learned dialogue model θ* and data manipulation model φ*
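Step 3 of Algorithm 1 optimizes the dialogue model with the weighted objective of Eq. (1). The sketch below is a minimal, assumption-laden illustration of that step: a small MLP scorer produces softmax-normalized importance weights (standing in for the transformer-based scorer of §2.2), which are plugged into a weighted negative log-likelihood loss. Tensor shapes and the pooled sample representations are assumptions, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceScorer(nn.Module):
    """Scores each (original or augmented) sample; weights are a softmax over scores."""
    def __init__(self, repr_dim: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(repr_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, sample_repr: torch.Tensor) -> torch.Tensor:
        # sample_repr: (batch, repr_dim) pooled representations of the samples
        scores = self.mlp(sample_repr).squeeze(-1)      # (batch,)
        return F.softmax(scores, dim=-1)                # importance weights w_j

def weighted_nll(log_probs: torch.Tensor, targets: torch.Tensor,
                 weights: torch.Tensor, pad_id: int = 0) -> torch.Tensor:
    """Eq. (1): L_dm = sum_j -w_j * log p(y_j | x_j).

    log_probs: (batch, seq_len, vocab) decoder log-probabilities
    targets:   (batch, seq_len) gold response tokens
    weights:   (batch,) importance weights from the manipulation network
    """
    token_nll = F.nll_loss(log_probs.transpose(1, 2), targets,
                           ignore_index=pad_id, reduction="none")   # (batch, seq_len)
    mask = (targets != pad_id).float()
    sent_nll = (token_nll * mask).sum(dim=1)                        # -log p(y_j | x_j)
    return (weights * sent_nll).sum()

# Toy usage with assumed sizes.
scorer = InstanceScorer(repr_dim=256)
weights = scorer(torch.randn(4, 256))
loss = weighted_nll(torch.randn(4, 10, 1000).log_softmax(-1),
                    torch.randint(1, 1000, (4, 10)), weights)
loss.backward()
```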
More concretely, to optimize the neural dialogue model, at each iteration mini-batch instances are sampled from the training set and are then enriched through augmentation and weighting. The parameter θ of the neural dialogue model is then updated with the weighted negative log-likelihood loss in Eq. (1):
θ' = θ - α ∇_θ L_dm(θ, φ),   (3)
where ∇_θ L_dm(θ, φ) is the gradient of the loss L_dm with respect to θ, and α is the step size. The parameter φ of the data manipulation network is learned by taking a meta gradient descent step on validation samples (Ren et al., 2018). Equation (3) shows that θ' depends on φ. Therefore, the manipulation model (i.e., the reward function R_φ(d|D)) can be optimized by directly backpropagating the gradient through θ' to φ.

3 Experiments

3.1 Experiment Setup
Data. We conduct experiments on two English conversation datasets: (1) DailyDialog (Li et al., 2017), a collection of real-world dialogues widely used in open-domain dialogue generation. This is a multi-turn dataset, and we treat each turn as a training pair in this work; overlapping pairs are removed from the dataset. (2) OpenSubtitles (Lison and Tiedemann, 2016), a set of human-human conversations converted from movie transcripts. 80,000 instances are sampled from the original corpus, and the data proportion for the train/valid/test split is 8/1/1. The dataset statistics are listed in Table 1.

Dataset Train Valid Test
DailyDialog 54,889 6,005 5,700
OpenSubtitles 64,000 8,000 8,000
Table 1: Data statistics of the experiment corpora.

Experimental Models. To ascertain the effectiveness and applicability of our method, we implement the proposed data manipulation framework on the following representative models: (i) SEQ2SEQ: an RNN-based sequence-to-sequence model with attention mechanisms (Bahdanau et al., 2015); (ii) CVAE: a latent variable model using a conditional variational auto-encoder, trained with KL annealing and a BoW loss as in Zhao et al. (2017); (iii) Transformer: an encoder-decoder architecture relying solely on attention mechanisms (Vaswani et al., 2017).

Comparison Models. We also compare our approach with previous data augmentation or instance weighting methods: (i) CVAE-GAN (Li et al., 2019): a model that combines CVAE and GAN to augment the training data with more diversified expressions. (ii) Calibration (Shang et al., 2018): a calibration network measures the quality of data samples and enables weighted training for dialogue generation. (iii) Clustering (Csáky et al., 2019): it clusters high-entropy samples as noise and filters them out.

3.2 Evaluation Metrics
We adopt several widely used metrics (Liu et al., 2016; Li et al., 2016; Serban et al., 2017; Gu et al., 2019) to measure the performance of dialogue generation models, including BLEU, embedding-based metrics, entropy-based metrics and distinct metrics. In particular, BLEU measures how many n-gram overlaps a generated response shares with the reference. We compute BLEU scores for n<4 using smoothing techniques.¹ Embedding-based metrics compute the cosine similarity of bag-of-words embeddings between the hypothesis and the reference. We employ the following three embedding metrics to assess response quality: (1) Embedding Average (Avg): cosine similarity between two utterances, where the sentence embedding is the average word embedding weighted by the smooth inverse frequency, sent_emb(e) = (1/|e|) \sum_{ν∈e} [0.001 / (0.001 + p(ν))] emb(ν), as in Arora et al. (2017), where emb(ν) and p(ν) are the embedding and the probability² of word ν, respectively.

¹ https://www.nltk.org/_modules/nltk/translate/bleu_score.html
(2) Embedding Greedy (Gre): greedily matching words in two utterances based on the cosine similarities between their embeddings, and averaging the obtained scores; (3) Embedding Extrema (Ext): cosine similarity between the largest extreme values among the word embeddings in the two utterances. We use GloVe vectors as the word embeddings. Regarding entropy-based metrics, we compute the n-gram entropy of responses, Ent-n = -(1/|r|) \sum_{ν∈r} \log_2 p(ν), to measure their non-genericness, where the probabilities p(ν) of n-grams (n=1,2,3) are calculated based on maximum likelihood estimation on the training data (Serban et al., 2017). Distinct computes the diversity of the generated responses: Dist-n is defined as the ratio of unique n-grams (n=1,2,3) over all n-grams in the generated responses. Following Gu et al. (2019), we also report Intra-{1,2,3} metrics, computed as the average of the distinct values within each sampled response.

3.3 Implementation & Reproducibility
For word-level dialogue augmentation, we employ the pre-trained BERT-base language model with the uncased version of the tokenizer. We follow the hyper-parameters and settings suggested in Devlin et al. (2019). The replacement probability is set to 15%. For back-translation in sentence-level dialogue augmentation, we use the Transformer model (Vaswani et al., 2017) trained on the En-De and En-Ru WMT'19 news translation tasks (Ng et al., 2019). German and Russian sentences were tokenized with the Moses tokenizer (Koehn et al., 2007). The same hyper-parameters are used for the translation tasks, i.e., word representations of size 1024, dropout with 0.8 keep probability, feed-forward layers with dimension 4096, and 6 blocks in the encoder and decoder with 16 attention heads. Models are optimized with the Adam optimizer (Kingma and Ba, 2015) using an initial learning rate of 7e-4. Regarding the dialogue model implementations, we adopt a 2-layer bidirectional LSTM as the encoder and a unidirectional one as the decoder for both SEQ2SEQ and CVAE. The hidden size is set to 256, and the latent size used in CVAE is set to 64. The Transformer model for dialogue generation is configured with a 512 hidden size, 8 attention heads and 6 blocks in both the encoder and decoder. The hyper-parameters in the baseline models are set following the original papers (Li et al., 2019; Shang et al., 2018; Csáky et al., 2019).

² Probability is computed based on the maximum likelihood estimation on the training data.
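As a concrete reference for the back-translation step configured above, the sketch below round-trips an utterance through publicly released WMT'19 translation models via the fairseq hub. This is a frozen, offline illustration only: the exact checkpoint names are an assumption, and the joint fine-tuning of the translation components with the dialogue model through a Gumbel-softmax relaxation (§2.1.2, §2.4) is not shown.

```python
import torch

# Assumed fairseq hub checkpoints for the WMT'19 single models (Ng et al., 2019).
en2de = torch.hub.load("pytorch/fairseq", "transformer.wmt19.en-de.single_model",
                       tokenizer="moses", bpe="fastbpe")
de2en = torch.hub.load("pytorch/fairseq", "transformer.wmt19.de-en.single_model",
                       tokenizer="moses", bpe="fastbpe")
en2de.eval()
de2en.eval()

def back_translate(sentence: str, beam: int = 5) -> str:
    """Paraphrase an utterance by translating it into German and back into English."""
    german = en2de.translate(sentence, beam=beam)
    return de2en.translate(german, beam=beam)

print(back_translate("why are you deciding to go abroad"))
```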
Table 2: Automatic evaluation results (%) on (a) DailyDialog and (b) OpenSubtitles. "⋆" denotes that the model is trained using our proposed data manipulation framework. The metrics Average, Extrema and Greedy are abbreviated as Avg, Ext and Gre, respectively.
Models Dist-1 Dist-2 Dist-3 Intra-1 Intra-2 Intra-3 Ent-1 Ent-2 Ent-3 BLEU Avg Ext Gre
(a) DailyDialog:
SEQ2SEQ 0.9026 4.2497 8.4039 87.909 94.399 95.971 6.7263 10.381 12.036 0.2160 67.671 47.472 68.349
SEQ2SEQ (⋆) 1.3058 5.8408 11.2820 88.628 94.268 96.171 7.0253 11.018 12.726 0.3619 68.018 47.665 68.708
CVAE 0.9798 4.6095 9.0876 91.848 96.815 98.025 6.9184 10.740 12.365 0.2617 66.935 46.926 68.068
CVAE (⋆) 2.0683 9.0082 17.3260 93.301 97.418 98.323 7.0278 11.078 12.586 0.2954 66.363 46.955 68.424
Transformer 1.3489 5.9736 11.3310 87.725 94.170 95.944 6.9024 10.624 11.941 0.2342 65.305 46.223 67.419
Transformer (⋆) 2.4763 11.6270 21.4520 89.058 96.615 98.248 7.1556 11.320 12.956 0.4163 66.908 46.284 67.656
(b) OpenSubtitles:
SEQ2SEQ 0.5695 2.9952 6.2377 96.200 97.754 98.355 6.5996 10.371 12.213 0.0078 55.912 40.320 57.664
SEQ2SEQ (⋆) 0.7285 3.6053 7.2580 95.938 97.829 98.561 6.8391 10.903 13.411 0.0210 58.105 41.113 59.551
CVAE 0.5493 2.9585 6.3159 78.534 90.028 98.864 5.8675 10.089 12.544 0.0019 54.508 41.262 62.139
CVAE (⋆) 1.0883 4.8967 9.7060 95.489 97.579 98.201 6.8952 10.902 12.200 0.0173 56.473 41.678 59.330
Transformer 0.7226 3.8053 8.3877 92.94 94.947 96.023 7.0361 11.091 11.832 0.0050 55.257 41.302 58.232
Transformer (⋆) 1.7264 6.8750 12.5770 94.223 97.204 98.055 7.0493 11.334 12.098 0.0110 55.219 40.701 59.081

Table 3: Performance (%) of our approach instantiated on the naive SEQ2SEQ and the baseline approaches on (a) DailyDialog and (b) OpenSubtitles.
Models Dist-1 Dist-2 Dist-3 Intra-1 Intra-2 Intra-3 Ent-1 Ent-2 Ent-3 BLEU Avg Ext Gre
(a) DailyDialog:
Calibration (Shang et al., 2018) 0.7278 3.2265 6.0570 86.619 91.697 93.753 6.7827 10.439 11.867 0.1876 67.309 47.347 67.886
CVAE-GAN (Li et al., 2019) 0.6996 3.2448 6.4911 85.329 92.804 94.953 6.8184 10.425 12.260 0.2149 68.012 47.079 68.007
Clustering (Csáky et al., 2019) 0.6532 3.0747 6.2315 78.612 87.268 91.151 6.8554 10.436 12.358 0.2062 69.040 47.367 68.276
Ours 1.3058 5.8408 11.2820 88.628 94.268 96.171 7.0253 11.018 12.726 0.3619 68.018 47.665 68.708
(b) OpenSubtitles:
Calibration (Shang et al., 2018) 0.5107 2.7129 5.6281 95.997 97.590 98.242 6.7281 10.625 12.322 0.0034 58.786 40.850 59.132
CVAE-GAN (Li et al., 2019) 0.5175 2.7843 5.8150 95.303 97.109 98.218 6.9186 10.747 12.592 0.0104 57.610 40.871 58.767
Clustering (Csáky et al., 2019) 0.4728 2.6349 5.3878 96.145 97.614 98.317 6.8789 10.869 13.271 0.0124 59.069 41.026 59.343
Ours 0.7285 3.6053 7.2580 95.938 97.829 98.561 6.8391 10.903 13.411 0.0210 58.105 41.113 59.551

3.4 Evaluation Results
To investigate the effectiveness and general applicability of the proposed framework, we instantiate our data manipulation framework on several state-of-the-art models for dialogue generation. The automatic evaluation results of our proposed learning framework and the corresponding vanilla models are listed in Table 2. Compared with the vanilla training procedure, the proposed data manipulation framework brings solid improvements for all three architectures on almost all the evaluation metrics.
Such improvements are consistent across both conversation datasets, affirming the superiority and general applicability of our proposed framework. We further compare our model with existing related methods. Not surprisingly, as shown in Table 3, our data manipulation framework outperforms the baseline methods on most metrics. In particular, the improvement on the Distinct metrics is much greater, which implies that data manipulation effectively induces the neural dialogue model to generate more diverse responses.

Opponent Win Loss Tie Kappa
Ours vs. SEQ2SEQ 45% 13% 42% 0.5105
Ours vs. Calibration 40% 9% 51% 0.4208
Ours vs. CVAE-GAN 37% 14% 49% 0.4063
Ours vs. Clustering 41% 12% 47% 0.4893
Table 4: The results of human evaluation on the test set of DailyDialog.

3.5 Human Evaluation
We use DailyDialog as the evaluation corpus since it is more similar to our daily conversations and easier for annotators to judge. Three graduate students are recruited to conduct manual evaluations. 100 test messages are randomly sampled. We present the input messages and the corresponding responses generated by our model and the comparison model to the annotators. The annotators are then required to compare the quality of these two responses (response1, response2), taking the following criteria into consideration: coherence, language consistency, fluency and informativeness, and to evaluate among "win" (response1 is better), "loss" (response2 is better) and "tie" (they are equally good or bad). Note that cases with different evaluation results are labeled as "tie". Table 4 summarizes the human evaluation results. The kappa scores indicate that the annotators came to a fair agreement in their judgements. Compared with the baseline methods, our data manipulation approach brings about more informative and coherent replies.

Dist-1 Dist-2 Dist-3 Intra-1 Intra-2 Intra-3 Ent-1 Ent-2 Ent-3 BLEU Avg Ext Gre
Baseline 0.8570 4.0123 7.9559 88.509 94.727 96.844 6.7783 10.394 11.719 0.2146 65.200 46.355 67.344
w/ word-level augmentation 1.2205 6.0622 12.2620 89.916 95.265 96.627 6.9457 10.920 12.334 0.2657 65.315 46.821 68.025
w/ sentence-level augmentation 1.4702 6.7803 13.0910 91.309 95.772 97.397 7.0260 10.952 12.517 0.2721 66.788 47.464 67.911
Table 5: Ablation test (%) for word-level and sentence-level augmentations.

Dist-1 Dist-2 Dist-3 Intra-1 Intra-2 Intra-3 Ent-1 Ent-2 Ent-3 BLEU Avg Ext Gre
Full model 2.0515 9.7186 18.9970 91.343 96.446 97.613 7.0858 11.121 12.545 0.3604 66.551 47.325 68.378
w/o weighting 1.8156 8.1939 15.9000 90.747 95.816 97.199 7.0976 11.130 12.731 0.5147 65.675 46.955 68.048
w/o augmentation 1.1456 5.4386 11.1140 86.399 92.293 94.825 6.8752 10.579 11.837 0.2002 64.937 46.540 67.541
w/o instance filter 1.8627 8.2850 15.9400 88.551 93.445 94.419 7.1440 11.305 12.823 0.2813 65.606 46.912 67.863
Table 6: Model ablation test (%) on DailyDialog.

Figure 4: Comparison of training with data manipulation and vanilla training using SEQ2SEQ on the validation set of DailyDialog (four panels plotting Distinct, Embedding, PPL and Entropy against training iterations). Dist-1, Embedding Extrema and Ent-3 are denoted as "Distinct", "Embedding" and "Entropy", respectively.

3.6 Model Analysis
Learning Efficiency. Figure 4 presents validation results along training iterations when training the SEQ2SEQ model on DailyDialog.
We observe that when training SEQ2SEQ with our framework, the initial learning speed is a bit slower than standard vanilla training; however, our framework surpasses vanilla training in the final stage. One reason is that, at the early stage, the data manipulation model takes some time to improve its manipulation skills, which may slow down the neural dialogue model's learning. Once the manipulation skills are effective enough, the neural dialogue model benefits from learning more effective samples instead of inefficient instances, and achieves better performance.

Examples with Different Augmentation Frequencies. The data manipulation model selectively chooses samples on which to conduct data augmentation. To further glean insights into which samples are favored by the augmentation model, we list examples with different augmentation frequencies in Figure 5. We notice that samples frequently augmented by the manipulation model are more reliable than those seldom augmented. Therefore, the dialogue model is able to learn from those effective instances and their synonymous variants.

Word-level vs. Sentence-level Augmentation. In our framework, we implement two kinds of augmentation mechanisms. Word-level augmentation enriches the given samples by substituting words, while sentence-level augmentation paraphrases the original samples through back-translation. We evaluate their performance and report results in Table 5. Both augmentation mechanisms improve performance over the vanilla SEQ2SEQ baseline, while sentence-level augmentation performs slightly better than word-level augmentation on most evaluation metrics. One possible reason is that sentence-level augmentation captures more paraphrasing phenomena.

Ablation Study. Table 6 presents the results of model variants obtained by ablating specific parts of the data manipulation model. Among the different variants, removing data augmentation degrades performance the most. Meanwhile, removing the weighting or the instance filter also decreases performance. This implies that the neural dialogue generation model not only benefits from more training samples but also reaps greater advantages from effective rather than inefficient instances.

Figure 5: Examples with different augmentation frequencies. Instances with higher augmentation frequencies are more effective than those seldom augmented. (Frequently augmented pairs include "X: What time are you leaving? Y: I'll leave at ten o'clock." with the variant "X: When do you leave? Y: I will leave at ten.", and "X: I'm afraid of the darkness. Y: Don't worry. I'll drive you back." with the variant "X: I fear the dark. Y: No worries, I will drive you back."; seldom-augmented pairs include "X: A vet - a veterinary surgeon. Y: Good gracious! what's that?" and "X: Ask me a question? What do you want to know? Y: well... er... it is just... just that i...".)

Distinct (∆) Embedding (∆) Entropy (∆) BLEU (∆)
50% training data 0.8179 (+109.88%) 1.6860 (+2.54%) 0.4910 (+4.20%) 0.0768 (+56.64%)
100% training data 1.0865 (+71.62%) 0.2720 (+0.40%) 0.4750 (+3.90%) 0.1307 (+43.21%)
Table 7: Performance improvements regarding different sizes of training data on DailyDialog. Dist-1, Embedding Greedy and Ent-3 are denoted as "Distinct", "Embedding" and "Entropy", respectively.

Impact of Training Data Scale. We explore the impact of training data scale on the data manipulation framework by comparing against a model trained on half the amount of the training data in DailyDialog.
As presented in Table 7, with only 50% of the training data, our model achieves a greater performance boost, which affirms the effectiveness and robustness of the proposed approach.

4 Related Work
Existing approaches to improving neural dialogue generation models mainly focus on building more powerful learning systems, using extra information such as conversation topics (Xing et al., 2017), persona profiles (Song et al., 2019), user emotions (Zhou et al., 2018), or external knowledge (Liu et al., 2018). Another popular framework for dialogue generation is the variational autoencoder (Kingma and Welling, 2014; Zhao et al., 2017; Shen et al., 2017), in which a latent variable is introduced to help the dialogue model generate more diverse responses. In contrast to previous research, we investigate improving the dialogue model from a different angle, namely adapting the training examples using data manipulation techniques.

Data augmentation is an effective way to improve the performance of neural models. To name a few, Kurata et al. (2016) propose to generate more utterances by introducing noise into the decoding process. Kobayashi (2018) and Wu et al. (2019) demonstrate that contextual augmentation using label-conditional language models helps to improve neural network classifiers on text classification tasks. Sennrich et al. (2016) boost neural machine translation models using back-translation. Xie et al. (2017) and Andreas (2020) design manually-specified strategies for data augmentation. Hou et al. (2018) utilize a sequence-to-sequence model to produce diverse utterances for language understanding. Li et al. (2019) and Niu and Bansal (2019) propose to generate sentences for dialogue augmentation. Compared with previous augmentation approaches for dialogue generation, augmented sentences in our framework are selectively generated using pre-trained models, and the augmentation process is additionally fine-tuned jointly with the training of dialogue generation.

Regarding data weighting, past methods (Jiang and Zhai, 2007; Rebbapragada and Brodley, 2007; Wang et al., 2017; Ren et al., 2018; Hu et al., 2019) have been proposed to manage the problem of training set biases or label noise. Lison and Bibauw (2017) propose to enhance a retrieval-based dialogue system with a weighting model. Shang et al. (2018) likewise design a matching network to calibrate dialogue model training through instance weighting. Cai et al. (2020) investigate curriculum learning to adapt the instance effect on dialogue model training according to sample complexity. In contrast, our proposed framework learns to reweight not only the original training examples but also the augmented examples. Another difference is that we directly derive data weights based on their gradient directions on a validation set, instead of separately training an external weighting model. Csáky et al. (2019) claim that high-entropy utterances in the training set lead to boring generated responses and thus propose to ameliorate this issue by simply removing training instances with high entropy. Although data filtering is a straightforward way to alleviate the problem of noisy data, the informative training samples remain untouched and insufficient. Our method, in contrast, holds the promise of generating more valid training data while alleviating the negative effect of noise.
Note that either data augmentation or instance reweighting can be considered band-aid solution: simply augmenting all training data risks introducing more noisy conversations as such low-quality examples prevail in human-generated dialogues, whilst adapting the sample effect merely by instance reweighting is also suboptimal since effective training samples remain insufficient. The proposed learning-to-manipulate framework organically integrates these two schemes, which collectively fulfill the entire goal. 5 Conclusion In this work, we consider the automated data manipulation for open-domain dialogue systems. To induce the model learning from effective instances, we propose a learnable data manipulation model to augment effective training samples and reduce the weights of inefficient samples. The resulting data manipulation model is fully end-to-end and can be trained jointly with the dialogue generation model. Experiments conducted on two public conversation datasets show that our proposed framework is able to boost the performance of existing dialogue systems. Our learning-to-manipulate framework for neural dialogue generation is not limited to the elaborately designed manipulation skills in this paper. Future work will investigate other data manipulation techniques (e.g., data synthesis), which can be further integrated to improve the performance. Acknowledgments We would like to thank all the reviewers for their insightful and valuable comments and suggestions. This work is supported by the National Natural Science Foundation of China-Joint Fund for Basic Research of General Technology under Grant U1836111 and U1736106. Hongshen Chen and Xiaofang Zhao are the corresponding authors. References Jacob Andreas. 2020. Good-enough compositional data augmentation. In ACL. Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In ICLR. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Hengyi Cai, Hongshen Chen, Cheng Zhang, Yonghao Song, Xiaofang Zhao, Yangxi Li, Dongsheng Duan, and Dawei Yin. 2020. Learning from easy to complex: Adaptive multi-curricula learning for neural dialogue generation. In AAAI. Rich´ard Cs´aky, Patrik Purgai, and G´abor Recski. 2019. Improving neural conversational models with entropy-based data filtering. In ACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In EMNLP. Xiaodong Gu, Kyunghyun Cho, JungWoo Ha, and Sunghun Kim. 2019. Dialogwae: Multimodal response generation with conditional wasserstein autoencoder. In ICLR. Yutai Hou, Yijia Liu, Wanxiang Che, and Ting Liu. 2018. Sequence-to-sequence data augmentation for dialogue language understanding. In COLING. Zhiting Hu, Bowen Tan, Ruslan Salakhutdinov, Tom M. Mitchell, and Eric P. Xing. 2019. Learning data manipulation for augmentation and weighting. In NeurIPS. Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In ICLR. Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in NLP. In ACL. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR. Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In ICLR. Sosuke Kobayashi. 
2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. In NAACL-HLT. 6343 Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL. Gakuto Kurata, Bing Xiang, and Bowen Zhou. 2016. Labeled data generation with encoderdecoder LSTM for semantic slot filling. In INTERSPEECH. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In NAACL-HLT. Juntao Li, Lisong Qiu, Bo Tang, Min Dong Chen, Dongyan Zhao, and Rui Yan. 2019. Insufficient data can also rock! learning to converse using smaller data with augmentation. In AAAI. Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In IJCNLP. Pierre Lison and Serge Bibauw. 2017. Not all dialogues are created equal: Instance weighting for neural conversational models. In SIGDIAL. Pierre Lison and J¨org Tiedemann. 2016. Opensubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In LREC. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In EMNLP. Shuman Liu, Hongshen Chen, Zhaochun Ren, Yang Feng, Qun Liu, and Dawei Yin. 2018. Knowledge diffusion for neural dialogue generation. In ACL. Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook fair’s wmt19 news translation task submission. In Proc. of WMT. Tong Niu and Mohit Bansal. 2019. Automatically learning data augmentation policies for dialogue tasks. In EMNLP. Umaa Rebbapragada and Carla E. Brodley. 2007. Class noise mitigation through instance weighting. In ECML. Mengye Ren, Wenyuan Zeng, Bin Yang, and Raquel Urtasun. 2018. Learning to reweight examples for robust deep learning. In ICML. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In ACL. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C. Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI. Mingyue Shang, Zhenxin Fu, Nanyun Peng, Yansong Feng, Dongyan Zhao, and Rui Yan. 2018. Learning to converse with noisy data: Generation with calibration. In IJCAI. Xiaoyu Shen, Hui Su, Yanran Li, Wenjie Li, Shuzi Niu, Yang Zhao, Akiko Aizawa, and Guoping Long. 2017. A conditional variational framework for dialog generation. In ACL. Haoyu Song, Weinan Zhang, Yiming Cui, Dong Wang, and Ting Liu. 2019. Exploiting persona information for diverse generation of conversational responses. In IJCAI. Bowen Tan, Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, and Eric P. Xing. 2019. Connecting the dots between MLE and RL for sequence generation. In ICLR Workshop. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017. Instance weighting for neural machine translation domain adaptation. In EMNLP. 
Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Conditional BERT contextual augmentation. In ICCS. Ziang Xie, Sida I. Wang, Jiwei Li, Daniel L´evy, Aiming Nie, Dan Jurafsky, and Andrew Y. Ng. 2017. Data noising as smoothing in neural network language models. In ICLR. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In AAAI. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. In ICLR. Tiancheng Zhao, Ran Zhao, and Maxine Esk´enazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In ACL. Hao Zhou, Minlie Huang, Tianyang Zhang, Xiaoyan Zhu, and Bing Liu. 2018. Emotional chatting machine: Emotional conversation generation with internal and external memory. In AAAI.
Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog
Libo Qin†, Xiao Xu†, Wanxiang Che†∗, Yue Zhang‡, Ting Liu†
†Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, China
‡School of Engineering, Westlake University, China; Institute of Advanced Technology, Westlake Institute for Advanced Study
†{lbqin, xxu, car, tliu}@ir.hit.edu.cn, [email protected]

Abstract
Recent studies have shown remarkable success in end-to-end task-oriented dialog systems. However, most neural models rely on large amounts of training data, which are available only for a limited number of task domains, such as navigation and scheduling. This makes it difficult to scale to a new domain with limited labeled data. Moreover, there has been relatively little research on how to effectively use data from all domains to improve the performance of each domain as well as unseen domains. To this end, we investigate methods that can make explicit use of domain knowledge and introduce a shared-private network to learn shared and domain-specific knowledge. In addition, we propose a novel Dynamic Fusion Network (DF-Net) which automatically exploits the relevance between the target domain and each source domain. Results show that our model outperforms existing methods on multi-domain dialogue, achieving the state of the art in the literature. Besides, with little training data, we show its transferability by outperforming the prior best model by 13.9% on average.

1 Introduction
Task-oriented dialogue systems (Young et al., 2013) help users to achieve specific goals such as restaurant reservation or navigation inquiry. In recent years, end-to-end methods in the literature usually adopt a sequence-to-sequence (Seq2Seq) model to generate a response from a dialogue history (Eric and Manning, 2017; Eric et al., 2017; Madotto et al., 2018; Wen et al., 2018; Gangi Reddy et al., 2019; Qin et al., 2019b; Wu et al., 2019a). Taking the dialogue in Figure 1 as an example, to answer the driver's query about the "gas station", the end-to-end dialogue system directly generates a system response given the query and a corresponding knowledge base (KB).

∗Corresponding author.

Figure 1: Example of a task-oriented dialogue that incorporates a knowledge base (KB) from the SMD dataset (Eric et al., 2017). Words with the same color refer to entities queried from the KB. Better viewed in color. (The KB lists, for each point of interest, its address, distance, POI type and traffic info, e.g., the gas station "Valero" at 200 Alester Ave, 2 miles away, with a road block nearby; in the dialogue, the driver asks for the address of the gas station and the system answers "Valero is located at 200 Alester Ave" and offers a route that avoids the road block.)

Though achieving promising performance, end-to-end models rely on a considerable amount of labeled data, which limits their usefulness for new and extended domains. In practice, we cannot collect rich datasets for each new domain.
Hence, it is important to consider methods that can effectively transfer knowledge from a source domain with sufficient labeled data to a target domain with limited or little labeled data. Existing work can be classified into two main categories. As shown in Figure 2(a), the first strand of work (Eric and Manning, 2017; Eric et al., 2017; Madotto et al., 2018; Wu et al., 2019a) simply combines multi-domain datasets for training. Such methods can implicitly extract shared features but fail to effectively capture domain-specific knowledge. As shown in Figure 2(b), the second strand of work (Wen et al., 2018; Qin et al., 2019b) trains a separate model for each domain, which can better capture domain-specific features. However, those methods ignore the shared knowledge between different domains (e.g., location words exist in both the schedule domain and the navigation domain).

Figure 2: Methods for multi-domain dialogue. Previous work either trains a general model on mixed multi-domain datasets (a), or trains on each domain separately (b). The basic shared-private framework is shown in (c). Our proposed extension with a dynamic fusion mechanism is shown in (d).

We consider addressing the limitation of existing work by modeling knowledge connections between domains explicitly. In particular, a simple baseline to incorporate domain-shared and domain-private features is the shared-private framework (Liu et al., 2017; Zhong et al., 2018; Wu et al., 2019b). Shown in Figure 2(c), it includes a shared module to capture domain-shared features and a private module for each domain. The method explicitly differentiates shared and private knowledge. However, this framework still has two issues: (1) given a new domain with extremely little data, the private module can fail to effectively extract the corresponding domain knowledge; (2) the framework neglects the fine-grained relevance across certain subsets of domains (e.g., the schedule domain is more relevant to navigation than to weather).

To address the above issues, we further propose a novel Dynamic Fusion Network (DF-Net), which is shown in Figure 2(d). In contrast to the shared-private model, a dynamic fusion module (see §2.3) is further introduced to explicitly capture the correlation between domains. In particular, a gate is leveraged to automatically find the correlation between the current input and all domain-specific models, so that a weight can be assigned to each domain for extracting knowledge. Such a mechanism is adopted for both the encoder and the decoder, as well as for a memory module that queries knowledge base features. Given a new domain with little or no training data, our model can still make the best use of existing domains, which cannot be achieved by the baseline model.

We conduct experiments on two public benchmarks, namely SMD (Eric et al., 2017) and MultiWOZ 2.1 (Budzianowski et al., 2018). Results show that our framework consistently and significantly outperforms the current state-of-the-art methods.
With limited training data, our framework outperforms the prior best methods by 13.9% on average. To the best of our knowledge, this is the first work to effectively explore a shared-private framework in multi-domain end-to-end task-oriented dialog. In addition, when given a new domain with few-shot or zero-shot data, our extended dynamic fusion framework can utilize fine-grained knowledge to obtain desirable accuracies, which makes it more adaptable to new domains. All datasets and code are publicly available at: https://github.com/LooperXX/DF-Net.

2 Model Architecture
We build our model based on a Seq2Seq dialogue generation model (§2.1), as shown in Figure 3(a). To explicitly integrate domain awareness, as shown in Figure 3(b), we first propose to use a shared-private framework (§2.2) to learn shared and the corresponding domain-specific features. Next, we further use a dynamic fusion network (§2.3) to dynamically exploit the correlation between all domains for fine-grained knowledge transfer, which is shown in Figure 3(c). In addition, adversarial training is applied to encourage the shared module to generate domain-shared features.

Figure 3: Workflow of our baseline and our proposed model. (Panels: (a) baseline model, (b) shared-private model, (c) dynamic fusion model.)

2.1 Seq2Seq Dialogue Generation
We define Seq2Seq task-oriented dialogue generation as finding the system response Y given the input dialogue history X and KB B. Formally, the probability of a response is defined as
p(Y | X, B) = \prod_{t=1}^{n} p(y_t | y_1, ..., y_{t-1}, X, B),   (1)
where y_t represents an output token. In a vanilla Seq2Seq task-oriented dialogue system (Eric and Manning, 2017), a long short-term memory network (LSTM; Hochreiter and Schmidhuber, 1997) is used to encode the dialogue history X = (x_1, x_2, ..., x_T) (T is the number of tokens in the dialogue history) to produce shared context-sensitive hidden states H = (h_1, h_2, ..., h_T):
h_i = BiLSTM_enc(φ^emb(x_i), h_{i-1}),   (2)
where φ^emb(·) represents the word embedding matrix. An LSTM is also used to repeatedly predict outputs (y_1, y_2, ..., y_{t-1}) through the decoder hidden states (h_{dec,1}, h_{dec,2}, ..., h_{dec,t}). For the generation of y_t, the model first calculates an attentive representation h'_{dec,t} of the dialogue history over the encoding representation H. Then, the concatenation of h_{dec,t} and h'_{dec,t} is projected to the vocabulary space V by U:
o_t = U [h_{dec,t}, h'_{dec,t}],   (3)
where o_t is the score (logit) for the next-token generation. The probability of the next token y_t ∈ V is finally calculated as:
p(y_t | y_1, ..., y_{t-1}, X, B) = Softmax(o_t).   (4)
Different from typical text generation with a Seq2Seq model, successful conversations in a task-oriented dialogue system heavily depend on accurate knowledge base (KB) queries. We adopt the global-to-local memory pointer mechanism (GLMP) (Wu et al., 2019a) to query the entities in the KB, which has shown the best performance. An external knowledge memory is proposed to store the knowledge base (KB) B and dialogue history X. The KB memory is designed for the knowledge source, while the dialogue memory is used for directly copying history words.
The entities in the external knowledge memory are represented in a triple format and stored in the memory module, which can be denoted as M = [B; X] = (m_1, ..., m_{b+T}), where m_i is one of the triples of M, and b and T denote the number of KB triples and dialogue history tokens, respectively. For a k-hop memory network, the external knowledge is composed of a set of trainable embedding matrices C = (C^1, ..., C^{k+1}). We can query knowledge in both the encoder and decoder processes to enhance the model's interaction with the knowledge module.

Query Knowledge in Encoder. We adopt the last hidden state as the initial query vector:
q^1_enc = h_T.   (5)
In addition, it can loop over k hops and compute the attention weights at each hop k using
p^k_i = Softmax((q^k_enc)^\top c^k_i),   (6)
where c^k_i is the embedding in the i-th memory position using the embedding matrix C^k. We obtain the global memory pointer G = (g_1, ..., g_{b+T}) by applying g^k_i = Sigmoid((q^k_enc)^\top c^k_i), which is used to filter the external knowledge for information relevant to decoding. Finally, the model reads out the memory o^k by the weighted sum over c^{k+1} and updates the query vector q^{k+1}_enc. Formally,
o^k_enc = \sum_i p^k_i c^{k+1}_i,   q^{k+1}_enc = q^k_enc + o^k_enc.   (7)
q^{k+1}_enc can be seen as the encoded KB information and is used to initialize the decoder.

Query Knowledge in Decoder. We use a sketch tag to denote all the possible slot types that start with a special token (e.g., @address stands for all addresses). When a sketch tag is generated by Eq. (4) at timestep t, we use the concatenation of the hidden state h_{dec,t} and the attentive representation h'_{dec,t} to query knowledge, which is similar to the process of querying knowledge in the encoder:
q^1_dec = [h_{dec,t}, h'_{dec,t}],   (8)
p^k_i = Softmax((q^k_dec)^\top c^k_i g^k_i).   (9)
Here, we can treat P_t = (p^k_1, ..., p^k_{b+T}) as the probabilities of the queried knowledge, and select the word with the highest probability from the query result as the generated word.

Figure 4: The dynamic fusion layer for fusing the domain-shared feature and domain-specific features. (Domain-specific features from the shared module and each domain module pass through an MLP expert gate for dynamic domain-specific fusion, followed by shared-specific fusion.)

2.2 Shared-Private Encoder-Decoder Model
The model in Section 2.1 is trained over mixed multi-domain datasets and the model parameters are shared across all domains. We call such a model the shared encoder-decoder model. Here, we propose to use a shared-private framework including a shared encoder-decoder for capturing domain-shared features and a private model for each domain to consider domain-specific features explicitly. Each instance X goes through both the shared and its corresponding private encoder-decoder.

Enhancing Encoder. Given an instance along with its domain, the shared-private encoder-decoder generates a sequence of encoder vectors denoted as H^{s,d}_enc, including shared and domain-specific representations from the corresponding encoder:
H^{s,d}_enc = (h^{s,d}_{enc,1}, ..., h^{s,d}_{enc,T}) = BiLSTM^{s,d}_enc(X).   (10)
The final shared-specific encoding representation H^f_enc is a mixture:
H^f_enc = W_2(LeakyReLU(W_1 [H^s_enc, H^d_enc])).   (11)
For ease of exposition, we define the shared-specific fusion function as:
shprivate : (H^s_enc, H^d_enc) → H^f_enc.   (12)
In addition, self-attention has been shown useful for obtaining context information (Zhong et al., 2018). Finally, we follow Zhong et al.
(2018) to use self-attention over H^f_enc to obtain a context vector c^f_enc. We replace h_T with c^f_enc in Eq. (5). This makes our query vector combine the domain-shared feature with the domain-specific feature.

Enhancing Decoder. At step t of the decoder, the private and shared hidden states are:
h^{s,d}_{dec,t} = LSTM^{s,d}_dec(X).   (13)
We also apply the shared-specific fusion function to the hidden states, and the mixture vector is:
shprivate : (h^s_{dec,t}, h^d_{dec,t}) → h^f_{dec,t}.   (14)
Similarly, we obtain the fused attentive representation h^{f'}_{dec,t} by applying attention from h^f_{dec,t} over H^f_enc. Finally, we replace [h_{dec,t}, h'_{dec,t}] in Eq. (8) with [h^f_{dec,t}, h^{f'}_{dec,t}], which incorporates shared and domain-specific features.

2.3 Dynamic Fusion for Querying Knowledge
The shared-private framework can capture the corresponding specific features, but it neglects the fine-grained relevance across certain subsets of domains. We further propose a dynamic fusion layer to explicitly leverage all domain knowledge, which is shown in Figure 4. Given an instance from any domain, we first feed it to the multiple private encoder-decoders to obtain domain-specific features from all domains. Next, all domain-specific features are fused by a dynamic domain-specific feature fusion module, followed by a shared-specific feature fusion for obtaining shared-specific features.

Dynamic Domain-Specific Feature Fusion. Given domain-specific features from all domains, a Mixture-of-Experts (MoE) mechanism (Guo et al., 2018) is adapted to dynamically incorporate all domain-specific knowledge for the current input in both the encoder and decoder. Here we describe how to fuse features at decoding timestep t; the fusion process for the encoder is the same. Given all domain feature representations at decoding step t, {h^{d_i}_{dec,t}}_{i=1}^{|D|}, where |D| is the number of domains, an expert gate E takes {h^{d_i}_{dec,t}} as input and outputs a softmax score α_{t,i} that represents the degree of correlation between each domain and the current input token. We achieve this with a simple feedforward layer:
α_t = Softmax(W h^d_{dec,t} + b).   (15)
The final domain-specific feature vector is a mixture of all domain outputs, dictated by the expert gate weights α_t = (α_{t,1}, ..., α_{t,|D|}), which can be written as h^{df}_{dec,t} = \sum_i α_{t,i} h^{d_i}_{dec,t}. During training, taking the decoder as an example, we apply a cross-entropy loss L^moe_dec as the supervision signal for the expert gate to predict the domain of each token in the response, where the expert gate output α_t can be treated as the t-th token's domain probability distribution predicted over the multiple private decoders. Hence, the more accurate the domain prediction is, the more correct the chosen expert becomes:
L^moe_dec = -\sum_{t=1}^{n} \sum_{i=1}^{|D|} e_i · \log(α_{t,i} | θ_s, θ^m_dec),   (16)
where θ_s represents the parameters of the encoder-decoder model, θ^m_dec represents the parameters of the MoE module (Eq. (15)) in the decoder, and e_i ∈ {0, 1} indicates whether the response with n tokens belongs to domain d_i. Similarly, we can obtain L^moe_enc for the encoder and sum them up as L^moe = L^moe_enc + L^moe_dec. L^moe is used to encourage samples from a certain source domain to use the correct expert, so that each expert learns the corresponding domain-specific features. When a new domain has little or no labeled data, the expert gate can automatically calculate the correlation between the different domains and the target domain, and thus better transfer knowledge from different source domains in both the encoder and decoder modules.
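As a concrete illustration of the expert gate in Eq. (15) and the fused feature h^{df}, the sketch below implements a token-level mixture-of-experts gate over per-domain hidden states, together with the auxiliary domain-prediction loss of Eq. (16). It is a simplified reading of the description above rather than the authors' released code; the tensor shapes and the concatenation of domain features before the gate are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainExpertGate(nn.Module):
    """Fuses per-domain decoder states with a softmax expert gate (Eq. 15)."""
    def __init__(self, hidden_dim: int, n_domains: int):
        super().__init__()
        # The gate scores each domain from the concatenated domain-specific states.
        self.gate = nn.Linear(n_domains * hidden_dim, n_domains)

    def forward(self, domain_states: torch.Tensor):
        """
        domain_states: (batch, n_domains, hidden_dim), one state per private decoder.
        Returns the fused feature h_df of shape (batch, hidden_dim) and gate weights alpha.
        """
        batch = domain_states.size(0)
        alpha = F.softmax(self.gate(domain_states.reshape(batch, -1)), dim=-1)  # (batch, |D|)
        fused = torch.einsum("bd,bdh->bh", alpha, domain_states)                # sum_i alpha_i * h^{d_i}
        return fused, alpha

def moe_loss(alpha: torch.Tensor, domain_label: torch.Tensor) -> torch.Tensor:
    """Eq. (16): cross-entropy pushing the gate toward the token's gold domain."""
    return F.nll_loss(torch.log(alpha + 1e-12), domain_label)

# Toy usage with assumed sizes: 3 domains, hidden size 128.
gate = DomainExpertGate(hidden_dim=128, n_domains=3)
states = torch.randn(4, 3, 128)        # private decoder states at one timestep
labels = torch.tensor([0, 2, 1, 0])    # gold domain of each sample
fused, alpha = gate(states)
loss = moe_loss(alpha, labels)
loss.backward()
```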
Shared-Specific Feature Fusion We directly apply shprivate operation to fuse shared and final domain-specific feature: shprivate : (hs dec,t, hdf dec,t) →hf dec,t. (17) Finally, we denote the dynamic fusion function as dynamic(hs dec,t, {hdi dec,t}|D| i=1). Similar to Section 2.2, we replace [hdec,t, h ′ dec,t] in Eq. 8 with [hf dec,t, hf′ dec,t]. The other components are kept the same as the shared-private encoder-decoder framework. Dataset Domains Train Dev Test SMD Navigate, Weather, Schedule 2,425 302 304 Multi-WOZ 2.1 Restaurant, Attraction, Hotel 1,839 117 141 Table 1: Statistics of datasets. Adversarial Training To encourage the model to learn domain-shared features, we apply adversarial learning on the shared encoder and decoder module. Following Liu et al. (2017), a gradient reversal layer (Ganin and Lempitsky, 2014) is introduced after the domain classifier layer. The adversarial training loss is denoted as Ladv. We follow Qin et al. (2019a) and the final loss function of our Dynamic fusion network is defined as: L = γbLbasic + γmLmoe + γaLadv, (18) where Lbasic keep the same as GLMP (Wu et al., 2019a), γb, γm and γa are hyper-parameters. More details about Lbasic and Ladv can be found in appendix. 3 Experiments 3.1 Datasets Two publicly available datasets are used in this paper, which include SMD (Eric et al., 2017) and an extension of Multi-WOZ 2.1 (Budzianowski et al., 2018) that we equip the corresponding KB to every dialogue.1 The detailed statistics are also presented in Table 1. We follow the same partition as Eric et al. (2017), Madotto et al. (2018) and Wu et al. (2019a) on SMD and (Budzianowski et al., 2018) on Multi-WOZ 2.1. 3.2 Experimental Settings The dimensionality of the embedding and LSTM hidden units is 128. The dropout ratio we use in our framework is selected from {0.1, 0.2} and the batch size from {16, 32}. In the framework, we adopt the weight typing trick (Wu et al., 2019a). We use Adam (Kingma and Ba, 2015) to optimize the parameters in our model and adopt the suggested hyper-parameters for optimization. All hyper-parameters are selected according to validation set. More details about hyper-parameters can be found in Appendix. 3.3 Baselines We compare our model with the following state-ofthe-art baselines. 1The constructed datasets will be publicly available for further research. 6349 SMD Multi-WOZ 2.1 Model BLEU F1 Navigate F1 Weather F1 Calendar F1 BLEU F1 Restaurant F1 Attraction F1 Hotel F1 Mem2Seq (Madotto et al., 2018) 12.6 33.4 20.0 32.8 49.3 6.6 21.62 22.4 22.0 21.0 DSR (Wen et al., 2018) 12.7 51.9 52.0 50.4 52.1 9.1 30.0 33.4 28.0 27.1 KB-retriever (Qin et al., 2019b) 13.9 53.7 54.5 52.2 55.6 GLMP (Wu et al., 2019a) 13.9 60.7 54.6 56.5 72.5 6.9 32.4 38.4 24.4 28.1 Shared-Private framework (Ours) 13.6 61.7 56.3 56.5 72.8 6.6 33.8 39.8 26.0 28.3 Dynamic Fusion framework (Ours) 14.4* 62.7* 57.9* 57.6* 73.1* 9.4* 35.1* 40.9* 28.1* 30.6* Table 2: Main results. The numbers with * indicate that the improvement of our framework over all baselines is statistically significant with p < 0.05 under t-test. Model Entity F1 (%) Test ∆ Full model 62.7 w/o Domain-Shared Knowledge Transfer 59.0 3.7 w/o Dynamic Fusion Mechanism 60.9 1.8 w/o Multi-Encoder 61.0 1.7 w/o Multi-Decoder 58.9 3.8 w/o Adversarial Training 61.6 1.1 Table 3: Ablation tests on the SMD test set. • Mem2Seq (Madotto et al., 2018): the model takes dialogue history and KB entities as input and uses a pointer gate to control either generating a vocabulary word or selecting an input as the output. 
• DSR (Wen et al., 2018): the model leverages dialogue state representation to retrieve the KB implicitly and applies copying mechanism to retrieve entities from knowledge base while decoding. • KB-retriever (Qin et al., 2019b): the model adopts a retriever module to retrieve the most relevant KB row and filter the irrelevant information for the generation process. • GLMP (Wu et al., 2019a): the framework adopts the global-to-local pointer mechanism to query the knowledge base during decoding and achieve state-of-the-art performance. For Mem2Seq, DSR, KB-retriever 2, we adopt the reported results from Qin et al. (2019b) and Wu et al. (2019a). For GLMP, we rerun their public code to obtain results on same datasets.3 2For Multi-WOZ 2.1 dataset, most dialogs are supported by more than single row, which can not processed by KBretriever, so we compare our framework with it in SMD and Camrest datasets. 3Note that, we find that Wu et al. (2019a) report macro entity F1 as the micro F1, so we rerun their models (https://github.com/jasonwu0731/GLMP) and obtain results. 3.4 Results Follow the prior work (Eric et al., 2017; Madotto et al., 2018; Wen et al., 2018; Wu et al., 2019a; Qin et al., 2019b), we adopt the BLEU and Micro Entity F1 metrics to evaluate model performance. The results on the two datasets are shown in Table 2, we can observe that: 1) The basic shared-private framework outperforms the best prior model GLMP in all the datasets. This indicates that the combination of domain-shared and domain-specific features can better enhance each domain performanc compared with only utilizing the implicit domain-shared features. 2) Our framework achieves the state-of-the-art performance on two multi-domain task-oriented dialog datasets, namely SMD and Multi-WOZ 2.1. On SMD dataset, our model has the highest BLEU compared with baselines, which shows that our framework can generate more fluent response. More importantly, our model outperforms GLMP by 2.0% overall, 3.3% in the Navigate domain, 1.1% in the Weather domain and 0.6% in Schedule domain on entity F1 metric, which indicates that considering relevance between target domain input and all domains is effective for enhancing performance of each domain. On Multi-Woz 2.1 dataset, the same trend of improvement has been witnessed, which further shows the effectiveness of our framework. 3.5 Analysis We study the strengths of our model from several perspectives on SMD dataset. We first conduct several ablation experiments to analyze the effect of different components in our framework. Next, we conduct domain adaption experiments to verify the transferability of our framework given a new domain with little or no labeled data. In addition, we provide a visualization of the dynamic fusion layer and case study to better understand how the module affects and contributes to the performance. 6350 (a) Navigate Domain (b) Weather Domain (c) Schedule Domain Figure 5: Performance of domain adaption on different subsets of original training data. 6.3 4.7 17.6 8.8 11.0 42.7 0 5 10 15 20 25 30 35 40 45 Navigation Weather Schedule GLMP Ours Figure 6: Zero-shot performance (F1 score) on each domain on SMD dataset. The x-axis domain name represents that the domain is unseen and other two domains is the same as original dataset. Model Correct Fluent Humanlike GLMP 3.4 3.9 4.0 Our framework 3.6 4.2 4.2 Agreement 53% 61% 74% Table 4: Human evaluation of responses on the randomly selected dialogue history. 
3.5.1 Ablation Several ablation experiments and results are shown in Table 3. In detail, 1) w/o Domain-shared Knowledge Transfer denotes that we remove domainshared feature and just keep fused domain-specific feature for generation. 2) w/o Domain Fusion mechanism denotes that we simply sum all domainspecific features rather than use the MOE mechanism to dynamically fuse domain-specific features. 3) w/o Multi-Encoder represents that we remove multi-encoder module and adopt one shared encoder in our framework. 4) w/o Multi-Decoder represents that we remove the multi-decoder module and adopt one shared decoder. 5) w/o Adversarial Training denotes that we remove the adversarial training in experimental setting. Generally, all the proposed components are effective to contribute the final performance. Specifically, we can clearly observe the effectiveness of our dynamic fusion mechanism where w/o domain-specific knowledge fusion causes 1.8% drops and the same trend in removing domain-shared knowledge fusion. This Figure 7: Distribution of Mix-of-the-expert mechanism across source domains for randomly selected 100 examples in each domain on SMD dataset. further verifies that domain-shared and domainspecific feature are benefit for each domain performance. 3.5.2 Domain Adaption Low-Resource Setting To simulate lowresource setting, we keep two domains unchanged, and the ratio of the except domain from original data varies from [1%, 5%, 10%, 20%, 30%, 50%]. The results are shown in Figure 5. We can find that: (1) Our framework outperforms the GLMP baseline on all ratios of the original dataset. When the data is only 5% of original dataset, our framework outperforms GLMP by 13.9% on all domains averagely. (2) Our framework trained with 5% training dataset can achieve comparable and even better performance compared to GLMP with 50% training dataset on some domains. This implies that our framework effectively transfers knowledge from other domains to achieve better performance for the low-resources new domain. Zero-Shot Setting Specially, we further evaluate the performance of domain adaption ability on the zero-shot setting given an unseen domain. We randomly remove one domain from the training set, and other domain data remained unchanged to train the model. During test, the unseen domain input use the MoE to automatically calculate the correlation between other domains and the current input and get the results. Results are shown in 6351 0.0015 0.1457 0.8527 0 1 Navigation Weather Schedule 0.0249 0.8908 0.0843 0 1 Navigation Weather Schedule Driver: Manhattan, please will it be cloudy on Monday ? Car: Monday will be foggy. Driver: Who will be at the meeting Friday? Car: HR will be at the Friday meeting. Weather@5% Schedule@5% 0.0015 0.1457 0.8527 0 0.5 1 Navigation Weather Schedule Driver: Manhattan, please will it be cloudy on Monday ? Car: Monday will be foggy. Weather@5% 0.1996 0.0242 0.7762 0 0.5 1 Navigation Weather Schedule Dialogue History: Find location and address to home that is nearest me. Your home is at 56 cadwell street. Thanks, set the gps for there. Response: No problem. Figure 8: Case of of expert gate distribution in SMD dataset. Text segments with red color represents appearing in both schedule and navigation domain. Figure 6, we can see our model significantly outperforms GLMP on three domains, which further demonstrate the transferability of our framework. 
3.5.3 Visualization of Dynamic Fusion Layer To better understand what our dynamic fusion layer has learned, we visualize the gate distribution for each domain in low-resource (5%) setting, which fuses domain-specific knowledge among various cases. As shown in the Figure 7, for a specific target domain, different examples may have different gate distributions, which indicates that our framework successfully learns how to transfer knowledge between different domains. For example, the navigation column contains 100 examples from its test set and each row show the corresponding expert value. More specifically, in the navigation column, we observe that the expert value in schedule domain is bigger than weather domain, which indicates schedule domain transfers more knowledge to navigation than weather domain. 3.5.4 Case Study Furthermore, we provide one case for navigation domain and their corresponding expert gate distribution. The cases are generated with 5% training data in the navigation domain and other two domain datasets keep the same, which can better show how the other two domains transfer knowledge to the low-resource domain. As shown in Figure 8, the expert value of schedule domain is bigger than the weather domain, which indicates the schedule contributes more than weather domain. In further exploration, we find word “location” and “set” appear both in navigation and schedule domain, which shows schedule has closer relation with navigation than weather, which indicates our model successfully transfers knowledge from the closest domain. 3.5.5 Human Evaluation We provide human evaluation on our framework and other baseline models. We randomly generated 100 responses. These responses are based on distinct dialogue history on the SMD test data. Following Wen et al. (2018) and Qin et al. (2019b), We hired human experts and asked them to judge the quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5. Results are illustrated in Table 4. We can see that our framework outperforms GLMP on all metrics, which is consistent with the automatic evaluation. 4 Related Work Existing end-to-end task-oriented systems can be classified into two main classes. A series of work trains a single model on the mixed multi-domain dataset. Eric et al. (2017) augments the vocabulary distribution by concatenating KB attention to generatge entities. Lei et al. (2018) first integrates track dialogue believes in end-to-end task-oriented dialog. Madotto et al. (2018) combines end-toend memory network (Sukhbaatar et al., 2015) into sequence generation. Gangi Reddy et al. (2019) proposes a multi-level memory architecture which first addresses queries, followed by results and finally each key-value pair within a result. Wu et al. (2019a) proposes a global-to-locally pointer mechanism to query the knowledge base. Compared with their models, our framework can not only explicitly utilize domain-specific knowledge but also consider different relevance between each domain. Another series of work trains a model on each domain separately. Wen et al. (2018) leverages dialogue state representation to retrieve the KB implicitly. Qin et al. (2019b) first adopts the KB-retriever to explicitly query the knowledge base. Their works consider only domain-specific features. In contrast, our framework explicitly leverages domain-shared features across domains. The shared-private framework has been explored in many other task-oriented dialog components. 
Liu and Lane (2017) applies a shared-private LSTM to generate shared and domain-specific features. Zhong et al. (2018) proposes a global-local architecture to learn shared feature across all slots and specific feature for each slot. More recently, Zhang et al. (2018) utilizes the shared-private model for text style adaption. In our work, we explore shared-private framework in end-to-end taskoriented dialog to better transfer domain knowledge for querying knowledge base. In addition, we take inspiration from Guo et al. (2018), who successfully apply the mix-of-the-experts (MoE) mechanism in multi-sources domain and cross-lingual 6352 adaption tasks. Our model not only combines the strengths of MoE to incorporate domain-specific feature, but also applies adversarial training to encourage generating shared feature. To our best of knowledge, we are the first to effectively explore shared-private framework in multi-domain end-toend task oriented dialog. 5 Conclusion In this paper, we propose to use a shared-private model to investigate explicit modeling domain knowledge for multi-domain dialog. In addition, a dynamic fusion layer is proposed to dynamically capture the correlation between a target domain and all source domains. Experiments on two datasets show the effectiveness of the proposed models. Besides, our model can quickly adapt to a new domain with little annotated data. Acknowledgements We thank Min Xu, Jiapeng Li, Jieru Lin and Zhouyang Li for their insightful discussions. We also thank all anonymous reviewers for their constructive comments. This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011 and 61772153. Besides, this work also faxed the support via Westlake-BrightDreams Robotics research grant. References Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gaˇsi´c. 2018. MultiWOZ - a large-scale multi-domain wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics. Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 37–49, Saarbr¨ucken, Germany. Association for Computational Linguistics. Mihail Eric and Christopher Manning. 2017. A copyaugmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 468–473, Valencia, Spain. Association for Computational Linguistics. Revanth Gangi Reddy, Danish Contractor, Dinesh Raghu, and Sachindra Joshi. 2019. Multi-level memory for task oriented dialogs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3744–3754, Minneapolis, Minnesota. Association for Computational Linguistics. Yaroslav Ganin and Victor Lempitsky. 2014. Unsupervised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495. Jiang Guo, Darsh Shah, and Regina Barzilay. 2018. Multi-source domain adaptation with mixture of experts. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4694–4703, Brussels, Belgium. Association for Computational Linguistics. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He, and Dawei Yin. 2018. Sequicity: Simplifying task-oriented dialogue systems with single sequence-to-sequence architectures. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1437–1447, Melbourne, Australia. Association for Computational Linguistics. Bing Liu and Ian Lane. 2017. Multi-domain adversarial learning for slot filling in spoken language understanding. arXiv preprint arXiv:1711.11310. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1–10, Vancouver, Canada. Association for Computational Linguistics. Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018. Mem2Seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1468–1478, Melbourne, Australia. Association for Computational Linguistics. Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, and Ting Liu. 2019a. A stack-propagation framework with token-level intent detection for spoken 6353 language understanding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2078–2087, Hong Kong, China. Association for Computational Linguistics. Libo Qin, Yijia Liu, Wanxiang Che, Haoyang Wen, Yangming Li, and Ting Liu. 2019b. Entityconsistent end-to-end task-oriented dialogue system with KB retriever. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 133–142, Hong Kong, China. Association for Computational Linguistics. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 712, 2015, Montreal, Quebec, Canada, pages 2440– 2448. Haoyang Wen, Yijia Liu, Wanxiang Che, Libo Qin, and Ting Liu. 2018. Sequence-to-sequence learning for task-oriented dialogue with dialogue state representation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3781– 3792, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Chien-Sheng Wu, Richard Socher, and Caiming Xiong. 2019a. Global-to-local memory pointer networks for task-oriented dialogue. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Haiming Wu, Yue Zhang, Xi Jin, Yun Xue, and Ziwen Wang. 2019b. Shared-private LSTM for multidomain text classification. 
In Natural Language Processing and Chinese Computing - 8th CCF International Conference, NLPCC 2019, Dunhuang, China, October 9-14, 2019, Proceedings, Part II, pages 116–128. Steve J. Young, Milica Gasic, Blaise Thomson, and Jason D. Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179. Ye Zhang, Nan Ding, and Radu Soricut. 2018. SHAPED: Shared-private encoder-decoder for text style adaptation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1528–1538, New Orleans, Louisiana. Association for Computational Linguistics. Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1458– 1467, Melbourne, Australia. Association for Computational Linguistics. 6354 Hyperparameter Name SMD Multi-WOZ 2.1 Batch Size 16 32 Hidden Size 128 128 Embedding Size 128 128 Learning Rate 0.001 0.001 Dropout Ratio 0.2 0.1 Teacher Forcing Ratio 0.9 0.9 Number of Memory Network’s Hop 3 3 Table 5: Hyperparameters we use for SMD and MultiWOZ 2.1 dataset. A Appendices A.1 Hyperparameters Setting The hyperparameters used for SMD and MultiWOZ 2.1 dataset are shown in Table 5. A.2 Basic Loss Function The loss Lbasic used in our Shared-Private Encoder-Decoder Model is the same as GLMP. Different with the standard sequence-to-sequence with attention mechanism model, we use [hf dec,t, hf ′ dec,t] to replace [hdec,t, h ′ dec,t] and then get the sketch word probability distribution P vocab t . Based on the gold sketch response Y s = (ys 1, . . . , ys n), we calculate the standard cross-entropy loss Lv as follows: P vocab t = Softmax(U[hf dec,t, hf ′ dec,t]), (19) Lv = n X t=1 −log(P vocab t (ys t )). (20) Given the system response Y , we get the global memory pointer label sequence Glabel = (ˆg1, . . . , ˆgb+T ) and local memory pointer label sequence Llabel = (ˆl1, . . . , ˆln) as follows: ˆgi =  1 if Object(mi) ∈Y 0 otherwise , (21) ˆlt =  max(z) if ∃z s.t. yt =Object(mz) b + T + 1 otherwise , (22) where mi represents one triplet in the external knowledge M = [B; X] = (m1, . . . , mb+T ) and Object(·) function is denoted as getting the object word from a triplet. Then, the Lg can be written as follows: Lg =− b+T X i=1 (ˆgi · log gi + (1 −ˆgi) · log (1 −gi)) . (23) Based on the Llabel and Pt = (pk 1, . . . , pk b+T ), we can calculate the standard cross-entropy loss Ll as follows: Ll = n X t=1 −log(Pt(ˆlt)). (24) Finally, Lbasic is the weighted-sum of three losses: Lbasic = γgLg + γvLv + γlLl, (25) where γg, γv and γl are hyperparameters. A.3 Adversarial Training We apply a Convolutional Neural Network (CNN) as domain classifier both in the shared encoder and shared decoder to identify the domain of shared representation of dialogue history Hs enc and response Hs dec. 
Taking the encoder as an example, based on $H^{s}_{enc}$ we extract a context representation $c^{s}_{enc}$ with the CNN, and $\beta_{enc} \in \mathbb{R}^{|D|}$ is then computed as:
$$\beta_{enc} = \mathrm{Sigmoid}\big(\mathrm{LeakyReLU}(W^{enc}\, c^{s}_{enc})\big), \qquad (26)$$
We then train the domain classifier by optimizing its parameters $\theta_d$ to minimize the sequence-level binary cross-entropy loss $L^{adv}_{enc}$:
$$\max_{\theta_s}\min_{\theta_d} L^{adv}_{enc} = -\sum_{i=1}^{|D|}\Big(e_i \cdot \log(\beta_{enc,i}\mid\theta_s,\theta_d) + (1-e_i)\cdot\log(1-\beta_{enc,i}\mid\theta_s,\theta_d)\Big), \qquad (27)$$
where $\beta_{enc,i}$ represents the probability that the input dialogue history belongs to domain $d_i$. Similarly, we obtain $L^{adv}_{dec}$ and sum the two terms: $L^{adv} = L^{adv}_{enc} + L^{adv}_{dec}$. To update the encoder-decoder parameters $\theta_s$ underlying the domain classifier, we introduce a gradient reversal layer that reverses the gradient direction, which trains our model to extract domain-shared features that confuse the classifier. On the one hand, we train the domain classifier to minimize the domain classification loss; on the other hand, we update the parameters of the network underlying the domain classifier to maximize that loss, which works adversarially against the domain classifier. This encourages the shared encoder and decoder to extract domain-shared features.
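For concreteness, the sketch below shows one way to realize the gradient reversal layer and the domain-classification loss of Eqs. (26)-(27) in PyTorch. It is a minimal illustration, not the authors' released code: the module names (GradReverse, DomainClassifier), the max-pooling used to form the context vector, and the filter and kernel sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient flowing back into the shared encoder/decoder.
        return grad_output.neg()

class DomainClassifier(nn.Module):
    """CNN domain classifier over shared hidden states, following Eqs. (26)-(27)."""
    def __init__(self, hidden_size, num_domains, num_filters=64, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(hidden_size, num_filters, kernel_size, padding=1)
        self.proj = nn.Linear(num_filters, num_domains)

    def forward(self, shared_repr):                       # (batch, seq_len, hidden)
        h = GradReverse.apply(shared_repr)                # gradient reversal layer
        h = self.conv(h.transpose(1, 2))                  # (batch, filters, seq_len)
        c = h.max(dim=2).values                           # context representation c^s (pooling assumed)
        return torch.sigmoid(F.leaky_relu(self.proj(c)))  # beta in R^{|D|}, Eq. (26)

def adversarial_loss(beta, domain_onehot):
    """Sequence-level binary cross-entropy of Eq. (27), summed over the |D| domains."""
    return F.binary_cross_entropy(beta, domain_onehot, reduction="sum")
```

In training, this loss would be computed for both the shared encoder and the shared decoder and summed, with the reversal layer pushing the shared representations toward domain-invariance while the classifier itself is trained normally.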
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6355–6365 July 5 - 10, 2020. ©2020 Association for Computational Linguistics 6355
Learning Efficient Dialogue Policy from Demonstrations through Shaping
Huimin Wang♣† Baolin Peng♠†∗ Kam-Fai Wong♣
♠Microsoft Research ♣The Chinese University of Hong Kong
[email protected] {hmwang,kfwong}@se.cuhk.edu.hk
∗Corresponding author. †Equal contribution.
Abstract
Training a task-oriented dialogue agent with reinforcement learning is prohibitively expensive since it requires a large volume of interactions with users. Human demonstrations can be used to accelerate learning progress. However, how to effectively leverage demonstrations to learn dialogue policy remains less explored. In this paper, we present S2Agent, an agent that efficiently learns dialogue policy from demonstrations through policy shaping and reward shaping. We use an imitation model to distill knowledge from demonstrations, based on which policy shaping estimates feedback on how the agent should act in policy space. Reward shaping is then incorporated to explicitly bonus, in value space, state-actions similar to the demonstrations, encouraging better exploration. The effectiveness of the proposed S2Agent is demonstrated in three dialogue domains and a challenging domain adaptation task with both user simulator evaluation and human evaluation.
1 Introduction
With the flourishing of conversational assistants in daily life (such as Google Assistant, Amazon Alexa, Apple Siri, and Microsoft Cortana), task-oriented dialogue systems that serve users on specific tasks have attracted increasing research effort. Dialogue policy optimization is one of the most critical tasks of dialogue modeling. One of the most straightforward approaches is the rule-based method, which contains a set of expert-defined rules for dialogue modeling. Though rule-based dialogue systems perform reasonably in some scenarios, handcrafting such rules is time-consuming and not scalable. Recently, dialogue policy learning has been formulated as a reinforcement learning (RL) problem and tackled with deep RL models (Li et al., 2017; Lipton et al., 2018; Peng et al., 2017). RL-based methods have shown great potential for building robust dialogue systems automatically. However, due to their interactive nature, RL-based agents demand an environment to operate in. As illustrated in Figure 1, RL-based dialogue agents need to interact with human users and update their policy in an online fashion, requiring the agents to have good online performance from the start of training. In addition, one of the biggest challenges of RL approaches is the reward sparsity issue, which makes exploration in a large action space inefficient. As a consequence, training RL-based agents requires a prohibitively large number of interactions to achieve acceptable performance, which may incur a significant amount of expense (Pietquin et al., 2011; Lipton et al., 2016; Peng et al., 2018b). Several attempts have been made to improve learning efficiency and tackle the reward sparsity issue. Different types of heuristics have been proposed in the form of intrinsic rewards to guide exploration more efficiently (Lipton et al., 2016; Mohamed and Rezende, 2015; Peng et al., 2017, 2018a; Takanobu et al., 2019). When building a dialogue system, it is typically affordable to recruit experts to gather some demonstrations of the expected agent behaviors.
We therefore aim to address the aforementioned challenges from a different perspective and assume having access to human-provided demonstrations. In this paper, we investigate how to efficiently leverage these demonstrations to alleviate reward sparsity and improve policy learning quality. Previous work (Lipton et al., 2016) used a simple technique termed as Replay Buffer Spiking (RBS) to pre-fill experience replay buffer with human demonstrations, which yields good performance, especially in the beginning. (Hester et al., 2018) proposed Deep Q-learning from Demonstrations (DQfD) that combines temporal difference updates with a supervised classification loss of actions in demonstrations to 6356 improve learning efficiency in gaming domains. However, whether it is feasible and how to effectively leverage human demonstration in dialogue scenarios are less explored. Hence, in this paper, we propose a new strategy of leveraging human demonstrations to learn dialogue policy efficiently. Our dialogue agent, termed as S2Agent1, learns dialogue policy from demonstrations trough policy shaping and reward shaping. Policy shaping (Griffith et al., 2013) is an approach to incorporating human feedback to advise how policy should behave like experts. It estimates feedback of a state-action pair from human demonstrations and then utilizes the feedback to reconcile the policy from any RL-based agents. This method speeds up learning progress in gaming domains but has not yet been studied in dialogue. However, directly applying policy shaping to dialogue faces several challenges. The original policy shaping uses a tabular analogous method to estimate feedback. This method limits its feasibility for complex problems like dialogue that has large state action representations. To deal with this issue, we propose to use deep neural networks, which represent state-action space with function approximation and distill knowledge from human demonstrations, to estimate feedback. In addition, policy shaping calibrates agents’ behavior in policy space, and it is inherently not designed to tackle reward sparsity issues. Considering this, we further introduce reward shaping to bonus these state-action pairs that are similar to demonstrations. It can be viewed as a shaping mechanism explicitly in value space to guide policy exploration towards actions which human experts likely conduct. Our contributions in this work are two-fold: • We propose a novel S2Agent that can effectively leverage human demonstrations to improve learning efficiency and quality through policy shaping and reward shaping. • We experimentally show that S2Agent can efficiently learn good policy with limited demonstrations on three single domain dialogue tasks and a challenging domain adaptation task using both simulator and human evaluations. 1Agent with policy Shaping and reward Shaping 2 Related Work Dialogue policy learning Deep reinforcement learning (RL) methods have shown great potential in building a robust dialog system automatically (Young et al., 2013; Su et al., 2016; Williams et al., 2017; Peng et al., 2017, 2018a,b; Lipton et al., 2018; Li et al., 2020; Lee et al., 2019). However, RL-based approaches are rarely used in realworld applications, for these algorithms often require (too) many experiences for learning due to the sparse and uninformative rewards. A lot of progress is being made towards mitigating this sample complexity problem by incorporating prior knowledge. 
(Su et al., 2017) utilizes a corpus of demonstration to pre-train the RL-based models for accelerating learning from scratch. (Chen et al., 2017b) attempts to accelerate RL-based agents by introducing extra rewards from a virtual rule-based teacher. However, the method requires extra efforts to design a rule-based dialogue manager. (Hester et al., 2018) improve RL learning by utilizing a combination of demonstration, temporal difference (TD), supervised, and regularization losses. (Chen et al., 2017a) introduced a similar approach called companion teaching to incorporate human teacher feedback into policy learning. Nevertheless, companion teaching assumes that there is a human teacher to directly give a correct action during policy learning process and meanwhile train an action prediction model for reward shaping based on human feedback. Policy shaping Policy Shaping is an algorithm that enables introducing prior knowledge into policy learning. (Griffith et al., 2013) formulates human feedback on the actions from an agent policy as policy feedback and proposes Advise algorithm to estimate humans Bayes feedback policy and combine it with the policy from the agent. It shows significant improvement in two gaming environment. (Misra et al., 2018) uses policy shaping to bias the search procedure towards semantic parses that are more compatible with the text and achieve excellent performance. Reward shaping Reward shaping leverages prior knowledge to provides a learning agent with an extra intermediate reward F in addition to environmental reward r, making the system learn from a composite signal R + F (Ng et al., 1999). However, it is not guaranteed that with reward shaping, an MDP can still have an optimal policy that is 6357 Policy Model Imitation Model User Policy Shaping Human Demonstrations Reward Shaping Supervised Learning 𝜋𝑎𝑎|𝑠 𝜋𝑒𝑎|𝑠 g(𝜋𝑎, 𝜋𝑒) (𝑠, 𝑎, 𝑟, 𝑠′) 𝑟= 𝑟+ 𝐹𝑑 Figure 1: Illustration of the S2Agent for dialogue policy learning. identical to the original problem unless the shaping is potential-based reward shaping(Ng et al., 1999; Marthi, 2007). (Su et al., 2015) proposes to use RNNs to predict turn-level rewards and use the predicted reward as informative reward shaping potentials. (Peng et al., 2018a; Takanobu et al., 2019) use inverse reinforcement learning to recover reward functions from demonstrations for reward shaping. However, the estimated reward using these methods inevitably contains noise and failed to conform to potential-based reward function to guarantee the optimal policy. Inspired by (Brys et al., 2015), we directly estimate potential-based reward function from demonstrations. 3 Approach Our S2Agent is illustrated in Figure 1, consisting of four modules. 1) Dialogue policy model which selects the best next action based on the current dialogue state.; 2) Imitation Model is formulated as a classification task that takes dialogue states as input and predicts associated dialogue action, aiming to distill behaviors from human demonstrations.; 3) Policy Shaping provides feedback on how policy should behave like demonstrations. It then reconciles a final action based on actions from the policy model and imitation model attempting to generate more reliable exploration trajectories; 4) Followed by a reward shaping module that encourages demonstration similar state-actions by providing extra intrinsic reward signals. 
3.1 Policy Model
We consider dialogue policy learning as a Markov Decision Process (MDP) and improve the policy with a Deep Q-Network (DQN) (Mnih et al., 2015).2 In each turn, the agent observes the dialogue state $s$ and then executes an action $a$ with $\epsilon$-greedy exploration, which selects a random action with probability $\epsilon$ or otherwise adopts the greedy policy $a = \arg\max_{a'} Q(s, a'; \theta)$, where $Q(s, a'; \theta)$ approximates the value function and is implemented as a multi-layer perceptron (MLP) parameterized by $\theta$. The agent then receives the reward $r$, perceives the next user response $a_u$, and updates the state to $s'$. The tuple $(s, a, r, s')$ is stored in the experience replay buffer $D_a$. This loop continues until the dialogue terminates. The parameters of $Q(s, a; \theta)$ are updated by minimizing the following squared loss with stochastic gradient descent:
$$L(\theta) = \mathbb{E}_{(s,a,r,s') \sim D_a}\big[(y_i - Q(s, a; \theta))^2\big], \quad y_i = r + \gamma \max_{a'} Q'(s', a'; \theta'), \qquad (1)$$
where $\gamma \in [0, 1]$ is a discount factor and $Q'(\cdot)$ is the target value function, which is only periodically updated (line 26 in Algorithm 1). Differentiating the loss function with respect to $\theta$ yields the following gradient:
$$\nabla_\theta L(\theta) = \mathbb{E}_{(s,a,r,s') \sim D_a}\Big[\big(r + \gamma \max_{a'} Q'(s', a'; \theta') - Q(s, a; \theta)\big)\,\nabla_\theta Q(s, a; \theta)\Big]. \qquad (2)$$
As shown in lines 25-26 of Algorithm 1, in each iteration we update $Q(\cdot)$ using minibatch deep Q-learning.
2Our shaping methods are compatible with any policy optimizer. In this paper, we employ DQN due to its simplicity and robustness in training; replacing it with other methods such as Actor-Critic is straightforward.
3.2 Imitation Model
We assume access to a corpus of human-human dialogues, either from a log file or provided by recruited experts, which in this paper are termed human demonstrations $D_e$. $D_e$ consists of a set of state-action pairs $[(s_1, a_1), (s_2, a_2), \ldots, (s_n, a_n)]$. Theoretically, if $D_e$ were large enough to cover all possible states, the agent could respond perfectly by looking up the corresponding action in $D_e$. In practice, however, $D_e$ is usually limited and cannot cover all states. Hence, we propose to use a supervised learning model (denoted as the Imitation Model) to parameterize the relation between states and actions, expecting it to generalize to unseen states. We formulate the task as a classification problem: the model takes the dialogue state $s_i$ as input and is trained with a cross-entropy loss between the gold action $a_i$ and the predicted action $a$. Multiple models such as RNNs or CNNs could be used for this purpose, but for simplicity we choose an MLP.
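To ground Sections 3.1 and 3.2, here is a minimal PyTorch-style sketch of the Q-network update of Eq. (1) and the MLP imitation classifier. It is an illustrative reconstruction, not the authors' implementation: the `done` mask for terminal states and all class and variable names are assumptions, while the hidden sizes (60 for the Q-network, 50 for the imitation model), the tanh activations, and the default discount factor of 0.9 follow the implementation details reported in Section 4.2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """One-hidden-layer MLP approximating Q(s, .; theta) (Section 3.1)."""
    def __init__(self, state_dim, num_actions, hidden=60):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, num_actions))

    def forward(self, state):
        return self.net(state)                        # (batch, num_actions) Q-values

class ImitationModel(nn.Module):
    """MLP classifier over demonstrated (s_i, a_i) pairs (Section 3.2); softmax of the logits gives pi_e(a|s)."""
    def __init__(self, state_dim, num_actions, hidden=50):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, num_actions))

    def forward(self, state):
        return self.net(state)                        # action logits

def dqn_loss(q_net, target_net, batch, gamma=0.9):
    """Squared TD error of Eq. (1) on a minibatch (s, a, r, s') sampled from the replay buffer D_a."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():                             # target network Q' is held fixed
        y = r + gamma * target_net(s_next).max(dim=1).values * (1.0 - done)
    return F.mse_loss(q_sa, y)

def imitation_loss(im_model, states, gold_actions):
    """Cross-entropy training objective of the imitation model."""
    return F.cross_entropy(im_model(states), gold_actions)
```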
3.3 Policy Shaping
Incorporating human feedback into RL can accelerate its learning progress (Griffith et al., 2013; Cederborg et al., 2015). Policy shaping is a representative approach that estimates the human's Bayes-optimal feedback policy and then combines this feedback policy with the policy of an underlying RL model. The feedback policy is computed as:
$$\pi_e(a|s) = \frac{C^{\Delta_{s,a}}}{C^{\Delta_{s,a}} + (1 - C)^{\Delta_{s,a}}}, \qquad (3)$$
where $\Delta_{s,a}$ is the difference between the number of positive and negative feedback, i.e., the number of occurrences of $(s, a)$ in the human demonstrations, and $C$ is the probability that feedback from the demonstrations is consistent.3 For example, $C = 0.7$ means that with probability 0.7 the feedback from the demonstrations is considered reliable; if $C = 0.5$, policy shaping is meaningless since it treats every action equally. However, $\Delta_{s,a}$ is difficult to estimate from demonstrations in dialogue scenarios, since the state and action spaces are large and sparse. To deal with this issue, we propose to use the aforementioned Imitation Model to estimate feedback from demonstrations. Specifically, we sample $N$ times from the imitation model policy $\pi_e(a|s)$ to form a committee $a_1, a_2, \ldots, a_N$ denoting $N$ votes. We then count, for each action, the number of votes $c_a$ as positive feedback from the human demonstrations, and use the expectation of the binomial distribution, $N \cdot (1 - C)$, as the number of negative feedback. In dialogue, we therefore use:
$$\Delta_{s,a} = c_a - N \cdot (1 - C). \qquad (4)$$
Finally, the policy is reconciled from the policy model and the imitation model by multiplying them together:
$$\pi(a|s) = \frac{\pi_a(a|s) \times \pi_e(a|s)}{\sum_a \pi_a(a|s) \times \pi_e(a|s)}. \qquad (5)$$
3It is a parameter to control noise in the demonstrations.
Policy shaping operates in the policy space and can be viewed as a mechanism that biases the agent's learning towards the policy distilled from the demonstrations to improve learning efficiency. The reconciled policy in Eq. 5 allows the underlying RL model to surpass the imitation model $\pi_e$.
Algorithm 1 S2Agent learning algorithm
Input: N, ϵ, θ, C, Da, De, γ, Z
Output: Qθ(s, a)
1: init experience replay Da as empty.
2: init Qθ(s, a) and Q′θ′(s, a) with θ = θ′.
3: init demo buffer De with human conversation data; train the expert model with De and load πe(a|s).
4: for n = 1 : N do
5:   user starts a dialogue with user action au.
6:   init dialogue state s.
7:   while s is not terminal do
8:     with probability ϵ select a random action a.
9:     otherwise select a = argmaxa Q(s, a; θ).
10:    # policy shaping starts
11:    count the number of occurrences of each action and compute ∆a with Eq. 4.
12:    obtain the shaped action distribution from the policy shaper following Eq. 3.
13:    reconcile the final action distribution as in Eq. 5 and sample action a.
14:    # policy shaping ends
15:    execute a, obtain next state s′, receive reward r.
16:    calculate φD(s, a) with Eq. 9.
17:    if n > 1 then
18:      # reward shaping starts
19:      obtain FD with Eq. 7.
20:      store transition (s, a, r + FD, s′) in Da.
21:      # reward shaping ends
22:    end if
23:  end while
24:  sample a minibatch of (s, a, r, s′) from Da.
25:  update Qθ via minibatch Q-learning according to the gradient of Eq. 1.
26:  every Z steps reset Q′θ′ = Qθ.
27: end for
3.4 Reward Shaping
Most reward functions in dialogue scenarios are manually defined: typically, -1 for each turn plus a large positive or negative reward indicating the status of the dialogue at the end of a session. Such sparse rewards are one of the reasons RL agents have poor learning efficiency; initially, the agents have to explore state-actions uniformly at random. To this end, we propose to use reward shaping to integrate priors into RL learning and alleviate reward sparsity. Reward shaping is a popular method for integrating prior knowledge into the reward function to improve policy exploration (Brys et al., 2015). It provides the learning agent with an extra intermediate, task-related reward that enriches the original reward signal:
$$r'(s, a) = r(s, a) + F_D(\cdot), \qquad (6)$$
where $F_D$ denotes rewards derived from demonstrations. However, modifying the reward function may change the original MDP and make the agent converge to a suboptimal point. Wiewiora et al. (2003) proved that the MDP remains unchanged and convergence is maintained if $F_D(\cdot)$ is defined as:
$$F_D(s, a, s', a') = \gamma\,\phi_D(s', a') - \phi_D(s, a), \qquad (7)$$
where $\phi_D(s, a)$ is a potential function over state-action pairs. Its definition is intuitive: we bonus policy paths that are consistent with the demonstrations. As such, the value of $\phi_D(s, a)$ is expected to be high when action $a$ is demonstrated in a state $s^d$ similar to $s$, and close to 0 if $s$ is completely different from $s^d_a$. To achieve this, a multivariate Gaussian is used to compute the similarity between state-action pairs:
$$G(s, a, s^d, a^d) = \begin{cases} \exp\!\big(-\tfrac{1}{2}(s - s^d_a)^{\top}\Sigma^{-1}(s - s^d_a)\big), & a = a^d \\ 0, & \text{otherwise} \end{cases} \qquad (8)$$
We search through the demonstrations to obtain the sample with the highest similarity:
$$\phi_D(s, a) = \max_{s^d_a} G(s, s^d_a). \qquad (9)$$
Using reward shaping to learn the policy has several advantages. It leverages demonstrations to bonus state-actions that are similar to the demonstrations, and the resulting reward is more informative and demonstration-guided than the human-defined reward, which mitigates the reward sparsity issue to some degree.
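The following sketch illustrates both shaping mechanisms numerically: the reconciled policy of Eqs. (3)-(5) and the potential-based shaping reward of Eqs. (7)-(9). It is a hedged reconstruction rather than the authors' code: representing pi_a(a|s) as a softmax over Q-values, the committee size N=50, and passing a normalized imitation distribution are assumptions, while C=0.7 and gamma=1 follow the implementation details reported later in the paper.

```python
import numpy as np

def shaped_policy(q_values, imitation_probs, C=0.7, n_votes=50, rng=None):
    """Reconcile the RL policy with imitation-model feedback (Eqs. 3-5).

    imitation_probs is assumed to be a normalized distribution pi_e(a|s) from the imitation model.
    """
    rng = rng or np.random.default_rng()
    # pi_a(a|s): a softmax over Q-values (an assumption; the policy model itself is an epsilon-greedy DQN).
    pi_a = np.exp(q_values - np.max(q_values))
    pi_a /= pi_a.sum()
    # Committee of N votes sampled from the imitation policy; c_a counts positive feedback per action.
    votes = rng.choice(len(imitation_probs), size=n_votes, p=imitation_probs)
    c_a = np.bincount(votes, minlength=len(imitation_probs))
    delta = c_a - n_votes * (1.0 - C)                        # Eq. (4)
    pi_e = C ** delta / (C ** delta + (1.0 - C) ** delta)    # Eq. (3), the Advise feedback policy
    pi = pi_a * pi_e                                         # Eq. (5)
    return pi / pi.sum()

def potential(state, action, demos, sigma_inv):
    """phi_D(s, a): best Gaussian similarity to a demonstrated pair with the same action (Eqs. 8-9)."""
    best = 0.0
    for s_d, a_d in demos:                                   # demos: list of (state_vector, action_id)
        if a_d == action:
            diff = state - s_d
            best = max(best, float(np.exp(-0.5 * diff @ sigma_inv @ diff)))
    return best

def shaping_reward(s, a, s_next, a_next, demos, sigma_inv, gamma=1.0):
    """Potential-based term F_D of Eq. (7), added to the environment reward r."""
    return gamma * potential(s_next, a_next, demos, sigma_inv) - potential(s, a, demos, sigma_inv)

# Usage sketch: pi = shaped_policy(q_values, im_probs)
#               r_shaped = r + shaping_reward(s, a, s2, a2, demos, np.eye(len(s)))
```

The identity covariance in the usage line is only a placeholder for the inverse covariance matrix of Eq. (8), whose estimation the paper does not detail in this section.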
4 Experiments and Results
We evaluate the proposed S2Agent with a user simulator on several public task-oriented datasets, covering movie ticket booking, restaurant reservation, and taxi reservation. Additionally, to assess the generalization capability of the shaping mechanisms, we conduct domain adaptation experiments. Finally, human evaluation results are reported.
4.1 Dataset
The raw conversation data for the movie ticket booking task are collected through Amazon Mechanical Turk, and the data for the restaurant reservation and taxi calling scenarios are provided by the Microsoft Dialogue Challenge.4 The three datasets have been manually labeled based on a schema defined by domain experts. We extend and annotate the movie booking task with a payment scenario to simulate extending the dialogue system with new slots and values. All datasets contain 11 intents. The movie dataset contains 13 slots, and the other three contain 29 slots. Detailed information about the intents and slots is provided in Table 3 of Appendix A.
4https://github.com/xiul-msr/e2e_dialog_challenge
4.2 Baseline Agents
To benchmark the performance of the shaping mechanisms, we developed several versions of task-completion dialogue agents for comparison:
• Imitation Model (IM) agent is implemented as a Multi-Layer Perceptron and trained on the human demonstration data to predict actions given dialogue states.
• DQN agent is learned with a Deep Q-Network.
• EAPC: Teaching via Example Action with Predicted Critique (EAPC), introduced by Chen et al. (2017a), leverages real-time human demonstrations to improve policy learning. EAPC assumes the existence of human teachers during the learning process: it receives example actions from human teachers and, in the meantime, trains an action prediction model on the example actions as a critic for turn-level reward shaping. Since human teachers are not available in our case, we implement EAPC in the absence of teachers but use the same amount of human demonstrations to train a weak action prediction model. If the predicted action is identical to the action given by the policy model, the agent receives an extra positive reward; otherwise it receives an extra negative reward. This method can be viewed as a variant of S2Agent with only reward shaping, using noisy reward estimates from the imitation model.
• DQfD (Hester et al., 2018) agent also leverages human demonstrations to improve policy learning. It adds additional classification
Table 1: The performance of the average turn and average reward of different agents in different domains. w/o rs denotes S2Agent without reward shaping; w/o ps denotes S2Agent without policy shaping;∗denotes significant level p < 0.05 with other baselines except DQfD in movie domain. Succ. denotes success rate. Agent Movie Restaurant Taxi Movie-Ext Succ.↑Turn↓Reward↑Succ.↑Turn↓Reward↑Succ.↑Turn↓Reward↑Succ.↑Turn↓Reward↑ IM 0.33 32.62 -11.47 0.16 37.56 -52.03 0.22 15.07 -27.33 0.37 35.38 -8.84 DQN 0.82 37.13 21.57 0.51 50.84 -31.65 0.84 45.69 -10.08 0.66 57.01 -40.37 EAPC 0.82 30.66 42.22 0.53 48.07 -24.86 0.88 38.71 19.10 0.65 55.34 -31.50 DQfD 0.81 27.53 50.57 0.52 43.90 -5.10 0.86 34.88 32.33 0.61 53.64 -19.34 S2Agent ∗ 0.82 21.25 72.68 0.57 38.35 16.40 0.87 27.25 61.50 0.70 49.97 3.39 w/o rs 0.82 23.30 69.20 0.57 39.66 11.85 0.88 28.14 55.47 0.67 51.03 -4.93 w/o ps 0.80 31.68 40.28 0.57 45.40 -9.45 0.85 35.39 28.20 0.65 53.68 -19.27 loss from human demonstrations to DQN to ensure that the agent predicts correct actions on human demonstrated states. In the early learning phase, DQfD is trained only with the demonstrations to obtain a policy that mimics the human. Then, accumulated experiences mixed with the demonstration are used to train DQfD. • S2Agent is our proposed agent that is trained with both policy shaping and reward shaping, as described in Algorithm 1. • S2Agent w/o rs is a variant of S2Agent which learns policy with only policy shaping to reconcile the final action. • S2Agent w/o ps is a variant of S2Agent but only has reward shaping to bonus state-actions similar to demonstrations. Implementation Details Imitation model agents for all domains are single layer MLPs with 50 hidden dimensions and tanh as the activation function. The IM agent is also used in policy shaping to reconcile the policy. All RL-based agents (DQN, DQfD, S2Agent ) are MLPs with tanh activations. Each policy network Q(.) has one hidden layer with 60 hidden nodes. All the agents are trained with the same set of hyper-parameters. ϵ-greedy is utilized for policy exploration. We set the discount factor as γ = 0.9. The target network is updated at the end of each epoch. To mitigate warm-up issues, We build a naive but occasionally successful rule-based agent to provide experiences in the beginning. For a fair comparison, we pre-fill the experience replay buffer Da with human demonstrations for all the variants of agents (Lipton et al., 2016). Confidence factor C used in policy shaping is set 0.7. As for the reward shaping, γ in equ.7 is set as 1. 4.3 User Simulator Training RL-based dialogue agents require an environment to interact with, and it usually needs a large volume of interactions to achieve good performance, which is not affordable in reality. It is commonly acceptable to employ a user simulator to train RL-based agents (Jain et al., 2018; Li et al., 2016; Schatzmann et al., 2007). We adopt a public available agenda-based user simulator (Li et al., 2016) for our experiment setup. During training, the simulator provides the agent with responses and rewards. The reward is defined as -1 for each turn to encourage short turns and a 6361 (a) Movie (b) Restaurant (c) Taxi Figure 3: The effect of number of human demonstration on the performance. The moving averaged success rate is calculated within 120 epochs for Movie, 200 epochs for Restaurant, and 200 epochs for Taxi. 
large positive reward (2L) for successful dialogue or a negative reward of L for failed one, where L (set as 70) is the maximum number of turns in each dialogue. A dialogue is considered successful only if the agent helps the user simulator accomplish the goal and satisfies all the user’s search constraints. In addition, the average number of turns and the average reward are also reported to evaluate each model. 4.4 Simulator Evaluation Main Results. The main simulation results are shown in Table 1 and Figure.2, 3, 4. The results show that with shaping mechanisms, S2Agent learns much faster and performs consistently better than DQN and DQfD in all the domains with a statistically significant margin. Figure 2 shows the learning curve of different agents in different domains. Firstly, the DQN agent performs better than the IM agent, which is not surprising since it interacts with the simulator and is optimized to solve user goals. DQfD and EAPC agents leverage human demonstrations to mitigate the reward sparsity issues. Their performances are consistently better than DQN. Besides, S2Agent w/o ps uses reward shaping to alleviate reward sparsity by bonusing additional rewards for states that are consistent with demonstrations. As a consequence, it performs better than DQN in all the domains. Though EAPC has a similar reward shaping mechanism, its reward estimation relies heavily on the qualify of the action prediction model. As such, EAPC performs slightly worse than S2Agent w/o ps. In addition, policy shaping reconciles the agent action with knowledge learned from human demonstrations. It biases the agent to explore these actions which human expert does. As shown in figure 2, S2Agent w/o rs learn the dialogue policy much faster than all the baselines. In the Movie domain, it achieves nearly a 60% success rate using only 20 epochs. By contrast, the second-best agent DQfD only achieves a 20% successful rate at epoch 20. Similar results are also observed in Restaurant and Taxi domains. When integrating both policy shaping and reward shaping to DQN, S2Agent achieves the best performance and is more data-efficient. For example, S2Agent in the Taxi domain achieves approximately 60% successful rate at 50 epoch while the following competitor only has around 40% successful rate. The above observation also confirms that policy shaping and reward shaping operate in different dimensions, which means policy shaping improves the learning by directly calibrating in the action space and reward shaping in the value function space, and are mutual-complementary. Noted that the improvement of combining policy shaping and reward shaping in the Movie domain is not as significant as that in Restaurant and Taxi. This is too large degree attributed to the increased complexity of Restaurant and Taxi dataset, which have two times more slots than the Movie dataset, meaning that the state-action space is much larger than the movie domain and posing more challenges in exploration. Under this situation, policy shaping and reward shaping benefit the S2Agent to a large extent. Results of training with varying number of demonstrations. Intuitively, the number of human demonstrations has a large impact on policy learning. The imitation model agent might be able to summarize a good expert policy when a large volume of human demonstrations is available. However, we hope the shaping mechanism is capable of improving learning efficiency with limited human demonstrations for RL-base agents. 
As such, we 6362 Figure 4: Learning curves of different agents in MovieExt domain, all the agents are adapted from trained agents in Movie domain. experiment with different sizes of demonstrations between 25 and 125 to asses the effect of different numbers of human demonstration on learning efficiency and quality. Figure 3 shows the average performance of each agent during learning, which indicates the learning speed and quality. Our proposed shaping mechanisms improve policy learning speed and quality and are robust to the number of demonstrations. Even with the small number of human demonstrations as 25, S2Agent achieves a 5% higher success rate than DQfD and EAPC in the Movie domain and 10% in the Taxi domain. As the number of demonstrations increases, the gap between DQfD and S2Agent becomes larger, showing that policy mechanisms can still benefit from more human demonstrations available. Results of domain extension Typically, RLbased agents are built with a fixed ontology. However, a dialogue system should be able to evolve as being used to handle new intents, slots, unanticipated actions from users. To asses the ability of quickly adapting to the new environment, we extend existing movie user simulator, denoted as Movie-Ext, to simulate domain adaption scenario. Movie-Ext has an additional payment task requiring the agent to converse with users to firstly book a ticket and then finish the payment. Details about the extended intent/slots can be found in the in appendix Table.3. All the agents are continually optimized from the previously trained agents for the movie ticket booking task. Meanwhile, we additionally collect a small number of human demonstrations to update the IM agent. Figure 4 shows the learning curves of different agents on the extended task. As we can see, both S2Agent and S2Agent w/o rs can quickly adapt to the new environment and outperform the IM agent, with only 150 epochs it achieves around 50% success rate. Though DQfD explicitly leverages human demonstrations, it still lags behind w/o rs, showing that shaping in the policy space is more effective than solely adding supervised learning loss for Q-learning. Reward shaping also benefits DQN to explore better policy. These observations confirm that S2Agent with shaping mechanism is capable of quickly adapting to the new environment. 4.5 Human Evaluation User simulators are not necessary to reflect the complexity of human users (Dhingra et al., 2017). To further evaluate the feasibility of S2Agent in real scenarios, We deploy the agents in Table 1 to interact with real human users in Movie and MovieExt domains 5. Table 2: Human evaluation results on Movie and Movie-Ext domains. We use models at epoch 50 and epoch 200 for Movie domain and Movie-Ext, respectively. w/o rs denotes S2Agent without reward shaping; w/o ps denotes S2Agent without policy shaping; ∗denotes significant level p < 0.05 with other agents. Succ. denotes success rate. Model Movie Movie-Ext Succ.↑ Rating↑ Succ.↑ Rating↑ IM 0.42 3.92 0.40 1.96 DQN 0.56 3.36 0.26 2.68 EAPC 0.68 3.96 0.34 3.12 DQfD 0.72 3.92 0.50 3.24 S2Agent ∗ 0.74 4.36 0.62 3.56 w/o rs 0.72 4.26 0.46 2.94 w/o ps 0.70 4.12 0.52 3.20 All evaluated agents are trained with 50 epochs and 200 epochs for Movie and Movie-Ext respectively. In each dialogue session, one of the agents is randomly selected to converse with a human user. Each user is assigned with a goal sampled from the corpus and is instructed to converse with the agent to complete the task. 
Users have the choice of terminating the task and ending the session at any time if users believe that the dialogue is unlikely to succeed or simply because the agent repeats for several turns. In such a case, the session is considered as a failure. Finally, at the end of each session, users are required to give explicit feedback on whether the dialogue succeeded (i.e., whether 5For the time and cost consideration, we only conduct experiments on Movie and Movie-Ext domains. 6363 the movie tickets were booked (and paid) with all the user constraints satisfied). Additionally, users are requested to rate the session on a scale from 1 to 5 about the quality/naturalness (5 is the best, 1 is the worst). We collect 50 dialogue sessions for each agent. The results are listed in Table 2. S2Agent and S2Agent w/o rs perform consistently better than DQN and DQfD, which is consistent with what we have observed in simulation evaluation. In addition, S2Agent achieves the best performance in terms of success rate and user rating. 5 Conclusion In this paper, we present a new strategy for learning dialogue policy with human demonstrations. Compared with previous work, our proposed S2Agent is capable of learning in a more efficient manner. By using policy shaping and reward shaping, S2Agent can leverage knowledge distilled from the demonstrations to calibrate actions from underlying RL agents for better trajectories, and obtains extra rewards for these state-actions similar to demonstrations alleviating reward sparsity for better exploration. The results of simulation and human evaluation show that our proposed agent is efficient and effective in both single domain and a challenging domain adaptation setting. Acknowledgments We appreciate the efforts from the anonymous reviewers; they have helped us improve this paper a lot. The research described in this paper is partially supported by Hong Kong RGC-GRF grant 14204118. References Tim Brys, Anna Harutyunyan, Halit Bener Suay, Sonia Chernova, Matthew E Taylor, and Ann Now´e. 2015. Reinforcement learning from demonstration through shaping. In Twenty-Fourth International Joint Conference on Artificial Intelligence. Thomas Cederborg, Ishaan Grover, Charles L Isbell, and Andrea L Thomaz. 2015. Policy shaping with human teachers. In Twenty-Fourth International Joint Conference on Artificial Intelligence. Lu Chen, Runzhe Yang, Cheng Chang, Zihao Ye, Xiang Zhou, and Kai Yu. 2017a. On-line dialogue policy learning with companion teaching. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 198–204. Lu Chen, Xiang Zhou, Cheng Chang, Runzhe Yang, and Kai Yu. 2017b. Agent-aware dropout dqn for safe and efficient on-line dialogue policy learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2454–2464. Bhuwan Dhingra, Lihong Li, Xiujun Li, Jianfeng Gao, Yun-Nung Chen, Faisal Ahmed, and Li Deng. 2017. Towards end-to-end reinforcement learning of dialogue agents for information access. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 484–495. Shane Griffith, Kaushik Subramanian, Jonathan Scholz, Charles L Isbell, and Andrea L Thomaz. 2013. Policy shaping: Integrating human feedback with reinforcement learning. In Advances in neural information processing systems, pages 2625–2633. 
Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Ian Osband, et al. 2018. Deep q-learning from demonstrations. In ThirtySecond AAAI Conference on Artificial Intelligence. Alankar Jain, Florian Pecune, Yoichi Matsuyama, and Justine Cassell. 2018. A user simulator architecture for socially-aware conversational agents. In Proceedings of the 18th International Conference on Intelligent Virtual Agents, pages 133–140. ACM. Sungjin Lee, Qi Zhu, Ryuichi Takanobu, Zheng Zhang, Yaoqin Zhang, Xiang Li, Jinchao Li, Baolin Peng, Xiujun Li, Minlie Huang, and Jianfeng Gao. 2019. ConvLab: Multi-domain end-to-end dialog system platform. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 64–69, Florence, Italy. Association for Computational Linguistics. Jinchao Li, Baolin Peng, Sungjin Lee, Jianfeng Gao, Ryuichi Takanobu, Qi Zhu, Minlie Huang, Hannes Schulz, Adam Atkinson, and Mahmoud Adada. 2020. Results of the multi-domain task-completion dialog challenge. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, Eighth Dialog System Technology Challenge Workshop. Xiujun Li, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Celikyilmaz. 2017. End-to-end taskcompletion neural dialogue systems. arXiv preprint arXiv:1703.01008. Xiujun Li, Zachary C Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, and Yun-Nung Chen. 2016. A user simulator for task-completion dialogues. arXiv preprint arXiv:1612.05688. Zachary Lipton, Xiujun Li, Jianfeng Gao, Lihong Li, Faisal Ahmed, and Li Deng. 2018. Bbq-networks: 6364 Efficient exploration in deep reinforcement learning for task-oriented dialogue systems. In ThirtySecond AAAI Conference on Artificial Intelligence. Zachary C Lipton, Jianfeng Gao, Lihong Li, Xiujun Li, Faisal Ahmed, and Li Deng. 2016. Efficient exploration for dialog policy learning with deep bbq networks & replay buffer spiking. CoRR abs/1608.05081. Bhaskara Marthi. 2007. Automatic shaping and decomposition of reward functions. In Proceedings of the 24th International Conference on Machine learning, pages 601–608. ACM. Dipendra Misra, Ming-Wei Chang, Xiaodong He, and Wen-tau Yih. 2018. Policy shaping and generalized update equations for semantic parsing from denotations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2442–2452, Brussels, Belgium. Association for Computational Linguistics. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529. Shakir Mohamed and Danilo Jimenez Rezende. 2015. Variational information maximisation for intrinsically motivated reinforcement learning. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 2125–2133. Andrew Y Ng, Daishi Harada, and Stuart Russell. 1999. Policy invariance under reward transformations: Theory and application to reward shaping. In ICML, volume 99, pages 278–287. Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, Yun-Nung Chen, and Kam-Fai Wong. 2018a. Adversarial advantage actor-critic model for taskcompletion dialogue policy learning. 
In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6149–6153. IEEE. Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Kam-Fai Wong. 2018b. Deep dyna-q: Integrating planning for task-completion dialogue policy learning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2182–2192. Baolin Peng, Xiujun Li, Lihong Li, Jianfeng Gao, Asli Celikyilmaz, Sungjin Lee, and Kam-Fai Wong. 2017. Composite task-completion dialogue policy learning via hierarchical deep reinforcement learning. arXiv preprint arXiv:1704.03084. Olivier Pietquin, Matthieu Geist, Senthilkumar Chandramohan, and Herv´e Frezza-Buet. 2011. Sampleefficient batch reinforcement learning for dialogue management optimization. ACM Transactions on Speech and Language Processing (TSLP), 7(3):7. Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a pomdp dialogue system. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, pages 149–152. Association for Computational Linguistics. Pei-Hao Su, Pawel Budzianowski, Stefan Ultes, Milica Gasic, and Steve Young. 2017. Sample-efficient actor-critic reinforcement learning with supervised data for dialogue management. arXiv preprint arXiv:1707.00130. Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina RojasBarahona, Stefan Ultes, David Vandyke, TsungHsien Wen, and Steve Young. 2016. Continuously learning neural dialogue management. arXiv preprint arXiv:1606.02689. Pei-Hao Su, David Vandyke, Milica Gasic, Nikola Mrksic, Tsung-Hsien Wen, and Steve Young. 2015. Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. arXiv preprint arXiv:1508.03391. Ryuichi Takanobu, Hanlin Zhu, and Minlie Huang. 2019. Guided dialog policy learning: Reward estimation for multi-domain task-oriented dialog. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 100– 110, Hong Kong, China. Association for Computational Linguistics. Eric Wiewiora, Garrison W Cottrell, and Charles Elkan. 2003. Principled methods for advising reinforcement learning agents. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 792–799. Jason D Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. arXiv preprint arXiv:1702.03274. Steve Young, Milica Gaˇsi´c, Blaise Thomson, and Jason D Williams. 2013. Pomdp-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179. 6365 Table 3: The data annotation schema. 
Movie Restaurant Taxi Movie-Ext Slots city, numberofpeople, theater, zip, distanceconstraints, theater chain, video format, state, starttime, date, moviename, ticket, taskcomplete city, closing, date, distanceconstraints, cuisine, greeting, restaurantname, numberofpeople, numberofkids, taskcomplete, other, pricing, starttime, state, zip, address, reservation, theater, atmosphere, rating, dress code, food, mealtype, choice, seating, occasion, personfullname, phonenumber, restauranttype car type, city, closing, car level, date, distanceconstraints, dropoff location, greeting, name, driver id, numberofpeople, other, pickup location, dropoff location city, budget, pickup location city, pickup time, speed, state, cost, taxi company, mc list, taskcomplete, taxi, zip, result, driver level, numberofkids, emergency degree city, numberofpeople, theater,zip, distanceconstraints, theater chain, video format, state, starttime, date, moviename, ticket, taskcomplete, bill, cost, tax, bill number, bank, service fee, pay type, discount, consumption point, credit card point Intent request, inform ,confirm question, confirm answer, greeting, closing, multiple choice, thanks, welcome, deny, not sure Table 4: The performance of Imitation Model on different dataset. Domain #Pair Precision Recall F1-score Movie 50 0.76 0.86 0.81 Restaurant 50 0.73 0.80 0.76 Taxi 50 0.83 0.90 0.86 Movie-Ext 100 0.84 0.83 0.82 A Appendices Table 3 lists all annotated dialogue acts and slots in details. Table 4 lists the training results of Imitation Model on all dataset.
2020
566
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6366–6375 July 5 - 10, 2020. ©2020 Association for Computational Linguistics 6366

SAS: Dialogue State Tracking via Slot Attention and Slot Information Sharing
Jiaying Hu1, Yan Yang1,3∗, Chengcai Chen2, Liang He1,3, Zhou Yu4
1East China Normal University 2Xiaoi Research, Xiaoi Robot Technology Co., Ltd 3Shanghai Key Laboratory of Multidimensional Information Processing 4University of California, Davis
[email protected], {yanyang,lhe}@cs.ecnu.edu.cn [email protected], [email protected]
∗Corresponding author.

Abstract
A dialogue state tracker is responsible for inferring user intentions from the dialogue history. Previous methods have difficulty handling dialogues with long interaction contexts because of the excessive information they contain. We propose a Dialogue State Tracker with Slot Attention and Slot Information Sharing (SAS) to reduce the interference of redundant information and improve tracking over long dialogue contexts. Specifically, we first apply a Slot Attention to learn a set of slot-specific features from the original dialogue and then integrate them using a Slot Information Sharing. The sharing improves the model's ability to deduce values from related slots. Our model yields a significantly improved performance compared to previous state-of-the-art models on the MultiWOZ dataset.

1 Introduction
The recent global adoption of personal assistants such as Alexa and Siri has made dialogue systems a more popular research topic. The major difference between dialogue systems and question answering is that dialogue systems need to track the dialogue history effectively. So, we normally use a dialogue state tracking component to track the user's intention throughout the conversation. A dialogue state is typically composed of a set of slot-value pairs in a task-oriented dialogue, such as "hotel-internet-yes". It means the slot "hotel-internet" has the value "yes".
Early dialogue state tracking models need a predefined ontology, which means the values of every slot are enumerated in advance (Henderson et al., 2014; Mrkšić et al., 2017; Zhong et al., 2018; Sharma et al., 2019). Such a practice is inefficient and costly. The large number of possible slot-value pairs makes deploying these models in real-life applications difficult (Rastogi et al., 2017). This difficulty is further amplified in multi-domain dialogue state tracking, where the dialogues have more than one task, because the manual effort grows exponentially with the complexity of the dialogues.
Wu et al. (2019) introduced a transferable dialogue state generator (TRADE), which can generate dialogue states from utterances using a copy mechanism. This generative model achieved relatively good performance, but it still has trouble extracting relevant information from the original dialogues. For example, a user may tell the agent that he/she needs a taxi in a turn, but the taxi's departure location is implicitly mentioned several turns ago. Inspired by (Chen et al., 2017; Chen, 2018), Chen et al. (2019) studied utilizing an attention mechanism to deal with the long-distance slot carryover problem. In their work, they first fused the information of the slot, its corresponding value and the dialogue distance into a single vector. Then they computed the attention between this single vector and the concatenation of dialogue and intent information. We simplify the attention method and introduce it into the dialogue state tracking task.
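As a small illustration of the slot-value representation just described (our own sketch under stated assumptions, not code from SAS), a dialogue state can be stored as a mapping from domain-slot names to values, and a predicted state can be checked against the gold state in the all-or-nothing way that the joint goal accuracy metric used later in this paper requires:

```python
# Illustrative only: a dialogue state as domain-slot -> value pairs,
# e.g. the "hotel-internet-yes" example from the introduction.
gold_state = {"hotel-internet": "yes", "hotel-pricerange": "moderate"}
pred_state = {"hotel-internet": "yes", "hotel-pricerange": "cheap"}

def joint_goal_correct(pred, gold):
    # The turn counts as correct only if every slot has exactly the right value.
    return pred == gold

def per_slot_accuracy(pred, gold):
    # A simplified notion of per-slot accuracy over the gold slots.
    return sum(pred.get(s) == v for s, v in gold.items()) / len(gold)

print(joint_goal_correct(pred_state, gold_state))  # False: pricerange differs
print(per_slot_accuracy(pred_state, gold_state))   # 0.5
```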
Moreover, it is common sense that there is some kind of relevance between two slots involving the same domain or the same attribute. For example, people tend to have a meal near the attraction they visit, so the slot "attraction-area" and the slot "restaurant-area" have the same value most of the time. For slots with a common or related value, if one slot never or seldom appears in the training set, sharing the learned features of a data-sufficient slot may benefit the model's tracking ability on these rare or unknown slots.
So we propose SAS, a new multi-domain dialogue state tracking model to resolve this issue to some extent. To be specific, we use a Slot Attention to localize the key features from the original information-excessive dialogue and a Slot Information Sharing to improve the model's ability to deduce values from related slots. The processed information provided by the slot attention and the sharing module makes the generator more sensitive to the location of the values in the dialogue history and thus helps it generate correct slot values. Experiments on the multi-domain MultiWOZ dataset (Budzianowski et al., 2018) show that SAS achieves 51.03% joint goal accuracy and outperforms the previous state-of-the-art model by 2.41%. On the single-domain dataset which only contains the restaurant domain, we achieve 67.34% joint goal accuracy, outperforming the prior best by 1.99%. In addition, we conduct an analysis of the experimental results to evaluate the quality of the values generated by our model.

2 Related Work
Early research on DST focused on the pipelined approach, which involves a special module named Spoken Language Understanding (SLU) before the DST module (Wang and Lemon, 2013; Williams, 2014; Perez and Liu, 2017). But it is not ideal to train SLU and DST separately, since the accumulated error in SLU may be passed on to the DST. To alleviate this problem, later studies focused on joint training methods (Henderson et al., 2014; Zilka and Jurcicek, 2015; Wen et al., 2017). Although the higher performance shows the effectiveness of models without SLU, some shortcomings still remain. For example, these models typically rely on semantic dictionaries which list the potential rephrasings for all slots and values in advance. Making such a list is costly.
Fortunately, the recent development of deep learning and representation learning helps DST get rid of this problem. (Mrkšić et al., 2017) proposed a novel Neural Belief Tracking (NBT) framework which was able to learn distributed representations of dialogue context over pre-trained word vectors, while (Dernoncourt et al., 2017) described a novel tracking method which used elaborate string matching and coreference resolution to detect values explicitly presented in the utterance. These models greatly improve the performance of DST, but they are not good at handling rare and unknown slot-value pairs which seldom or never appear in the training set. There have been many efforts to exploit general features between rare slot-value pairs and common ones. (Zhong et al., 2018) proposed GLAD, a model which builds global modules to share parameters between estimators for different slots and local modules to learn slot-specific features. (Nouri and Hosseini-Asl, 2018) improved GLAD by reducing the latency in training and inference time, while preserving its strong state tracking performance.
But as the dialogues become increasingly complex, the performance of these models on multi-domain dialogues is not as satisfying as on single-domain ones. Because of their dependency on the dialogue ontology, they have difficulty scaling up with domains. Once the number of domains increases, the number of slot-value pairs grows rapidly. With the copy mechanism, the sequence-to-sequence model TRADE (Wu et al., 2019) successfully got rid of any predefined slot-value pairs and generated dialogue states from conversation utterances. But we find there still remain several crucial limitations which have not been well addressed on multi-domain dialogues. First, these models rely on the long dialogue history to identify the values which belong to various domains and slots. Sometimes the information contained in the dialogue history is too rich for these models to utilize efficiently, and the redundant information tends to interfere with their value identification or value generation. Second, the related information among similar slots is wasted. To alleviate these problems, we introduce a slot attention and a slot information sharing module. The former isolates the most valuable information for each slot, while the latter integrates the information kept by all of its similar slots and improves the model's ability to deduce values from related slots.

3 Task Definition
Dialogue state tracking models take the interaction context as input and extract slot-value pairs explicitly or implicitly presented in conversations. The combinations of these slot-value pairs are the representations of the user's goal. In this paper, we denote X = {(u_1, r_1), ..., (u_T, r_T)} as the dialogue history, where u_1, ..., u_T and r_1, ..., r_T are respectively the set of user utterances and the set of system responses over T turns. The dialogue state at turn t is denoted ST_t = (slot: s_j, value: y_j^value), where s_j indicates the j-th slot and y_j^value is the ground truth value sequence for this slot. All the slots in the ontology are obtained by preprocessing the original MultiWOZ dataset with delexicalization.

[Figure 1: SAS model's architecture. This model consists of four parts: an encoder, a slot attention, a slot information sharing and a decoder.]

Moreover, we extend the definition of the slot to include the domain name for convenience. For instance, a slot in this paper will be "hotel-star", rather than "star". Our primary goal is to learn a generative dialogue state tracker model M : X × O → ST that can efficiently capture the user's intentions for dialogues including multiple domains. And unlike most of the previous models, the ontology O mentioned in this paper only contains the predefined slots and excludes their values.

4 Our Proposed Model
Figure 1 shows the architecture of SAS. SAS is a sequence-to-sequence model augmented with slot attention and slot information sharing. Slot attention enables better feature representation and slot information sharing helps with understanding less-seen slots. We describe the details of every component of SAS as follows:

4.1 Encoder
We use a 1-layer bidirectional gated recurrent unit (GRU) (Chung et al., 2014) to encode the dialogue history.
As in TRADE (Wu et al., 2019), our input to the model is the concatenation of all words in the recent l-turn dialogue history X_t = [u_{t-l+1}, r_{t-l+1}, ..., u_t, r_t] ∈ R^{|X_t| × d_emb}, where d_emb is the embedding size. First, each word in the dialogue history X is mapped to a distributed embedding vector. Then, a GRU is utilized to obtain the hidden state corresponding to each word in the text, and we denote these hidden states as the history hidden states H_t = {h^enc_1, h^enc_2, ..., h^enc_{|X_t|}} ∈ R^{|X_t| × d_hdd}.

4.2 Slot Attention
To isolate key features from the noisy dialogue history, we build the slot attention. In fact, multi-domain dialogues are usually complex and contain rich features. This challenges the model's ability to cope with the excessively rich information. To be specific, in one dialogue a user can mention various pieces of information, such as wanting to book a restaurant for a meal and then planning to see an attraction after the meal by ordering a taxi. There are in total 10 slots mentioned, spanning across the restaurant, attraction and taxi domains. Information from one domain may not be useful for another domain and can even cause confusion. For example, both restaurant and taxi mention time and people. So we propose the slot attention to extract only the history information that is useful for each slot.
More concretely, for a particular slot s_j, we first encode its slot name into slot hidden states SH_j = [sh^enc_{j1}, ..., sh^enc_{j|N|}], where |N| is the maximum length of the slot name phrase. Since the last hidden state sh^enc_{j|N|} provided by the GRU contains the context information of the entire phrase, we pick it as the representation of slot s_j. After that, we calculate the attention between the slot representation sh^enc_{j|N|} and the hidden states of the dialogue history H_t = [h^enc_1, ..., h^enc_{|X_t|}] to obtain the context vector c^j:

a^j = (h^enc)^T sh^enc_{j|N|}    (1)
sc^j_i = exp(a^j_i) / Σ_{i=1}^{|X_t|} exp(a^j_i)    (2)
c^j = Σ_{i=1}^{|X_t|} sc^j_i h^enc_i    (3)

Here, the score sc^j_i indicates the relevance between slot s_j and the dialogue history. The context vector c^j ∈ R^{d_hdd} denotes the slot-specific information grabbed from the entire dialogue history. Finally, we obtain the context vectors c = [c^1, c^2, ..., c^J] ∈ R^{d_hdd × J} for all J slots.

4.3 Slot Information Sharing
In the slot information sharing, there is a special matrix called the slot similarity matrix. This matrix controls the information flow among similar slots. We introduce two sharing methods according to their different calculation of the slot similarity matrix: the fix combination sharing and the k-means sharing. We will compare the effectiveness of the two methods in Section 6.

4.3.1 Fix Combination Method
We calculate the similarity between every two slots to construct the switch matrix. We first compute the cosine similarity over the two slot names and then calculate the similarity over the slot types. Specifically, the slot types can be divided into several categories such as "date" and "location". For example, if there are two slots "restaurant-area" and "restaurant-book day", then the similarity in the first part may be high since the two slot names share a common word "restaurant", while the similarity in the second part is quite low: the slot "restaurant-area" has a value whose type is "location", and "restaurant-book day" has a value which belongs to "date". Next, the two calculated similarities s_name and v_type are integrated with a hyperparameter α ∈ [0, 1], and we get a special matrix sim ∈ R^{J×J} as a result:
sim = α · s_name + (1 − α) · v_type    (4)

Here, the integration ratio α controls the final similarity of the slots. In Table 2, we show that different choices of this ratio impact the model's tracking performance. After that, the matrix sim is transformed into the slot similarity matrix M by a mask mechanism:

M_ij = 1 if sim_ij ≥ β,  M_ij = 0 if sim_ij < β    (5)

Here, the hyperparameter β acts as a threshold to decide whether two slots are similar enough to trigger the sharing switch and open the information path between them.

4.3.2 K-means Sharing Method
Since the fix combination method needs manual effort to search for the best hyperparameters, we propose another method, the K-means Sharing Method, which requires no hyperparameter tuning and achieves good performance on average. In this sharing method, we also compute the slot name similarity s_name_ij and the value type similarity v_type_ij between slots s_i and s_j, in the same way as in the fix combination method. Then we put the vectors (s_name_ij, v_type_ij) onto a flat space and divide these vectors into two groups with the k-means clustering algorithm. One group indicates that slots s_i and s_j are similar enough, while the other indicates they are not. The element M_ij is 1 if the pair falls in the similar group and 0 if it falls in the dissimilar group.
After getting the slot similarity matrix, whose entries are either 1 or 0, we perform a matrix multiplication between the context vectors c = [c^1, c^2, ..., c^J] ∈ R^{d_hdd × J} and the slot similarity matrix M ∈ R^{J×J}. We then get the integrated vectors int = [int^1, int^2, ..., int^J] ∈ R^{d_hdd × J}. These new vectors keep more expressive information for every slot. Specifically, int^j is calculated as follows:

int^j = Σ_{i=1}^{J} c^i · M_ij,  M_ij ∈ {0, 1}    (6)

As shown in the above equation, int^j is essentially the integrated result of all related context vectors c^i in c, and the integration is guided by the slot similarity matrix M. The matrix M plays the role of a switch which controls the information flow between slots and provides a selective integration. For example, this integration lets the data-insufficient slot "attraction-type" receive information from its related and data-sufficient slot "attraction-name", and helps our model deduce the related value for data-insufficient slots.

4.4 Decoder
The value prediction process of our decoder can be divided into two steps: first, predicting whether the value of a certain slot is constrained by the user; and then extracting the value if the constraint is mentioned in the dialogue. In the first step, a three-way classifier called the slot gate is used; it maps a vector taken from the encoded hidden states H_t to a probability distribution over the "ptr", "none", and "dontcare" labels. Once the slot gate predicts "ptr", the decoder fills the slot with the value extracted from the dialogue. Otherwise, it just fills the slot with "not-mentioned" or "does not care".
In the second step, another GRU is utilized as the decoder. During the decoding step of the j-th slot, given a sequence of word embeddings [w_{j1}, w_{j2}, ..., w_{j|N|}], the GRU transforms them into decoded hidden states [h^dec_{j1}, h^dec_{j2}, ..., h^dec_{j|N|}] with the slot's integrated vector int^j:

z^j_k = σ(U_z1 w_{jk} + U_z2 h^dec_{j,k−1})    (7)
r^j_k = σ(U_r1 w_{jk} + U_r2 h^dec_{j,k−1})    (8)
h̃^j_k = tanh(U_1 w_{jk} + U_2 (r^j_k ◦ h^dec_{j,k−1}))    (9)
h^dec_{jk} = (1 − z^j_k) ◦ h^dec_{j,k−1} + z^j_k ◦ h̃^j_k    (10)

Here, |N| is the length of the slot sequence and int^j is the initial hidden state input h^dec_{j0}.
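Before turning to the decoder's output distributions, the following minimal NumPy sketch ties together the slot attention and the fix combination sharing of Equations (1)-(6). It is our own illustrative rendering under stated assumptions, not the authors' implementation; the array shapes and helper names (H, sh_last, s_name, v_type) are assumptions made for the example.

```python
# Illustrative NumPy sketch of Equations (1)-(6); not the authors' code.
import numpy as np

def slot_attention(H, sh_last):
    # H: history hidden states, shape (|X_t|, d_hdd)
    # sh_last: last slot-name hidden state sh^enc_{j|N|}, shape (d_hdd,)
    a = H @ sh_last                              # Eq. (1): relevance logits a^j
    sc = np.exp(a - a.max())
    sc = sc / sc.sum()                           # Eq. (2): attention scores sc^j_i
    return sc @ H                                # Eq. (3): context vector c^j

def switch_matrix(s_name, v_type, alpha=0.8, beta=0.8):
    # s_name, v_type: (J, J) slot-name and value-type similarities
    sim = alpha * s_name + (1 - alpha) * v_type  # Eq. (4)
    return (sim >= beta).astype(float)           # Eq. (5): slot similarity matrix M

def share(c, M):
    # c: (d_hdd, J) context vectors; M: (J, J) switch matrix
    return c @ M                                 # Eq. (6): int^j = sum_i c^i * M_ij
```

In this reading, M simply decides which context vectors are summed together, so a rarely seen slot can borrow the context gathered for a related, data-sufficient slot.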
The integrated vector int^j makes the decoded hidden states contain more information about the dialogue history. So they are more sensitive to whether the value of slot j is mentioned in the dialogue and where it is located. With the decoded hidden state h^dec_{jk}, the generator computes P^gen_{jk}, the probability of the value being generated from the vocabulary list E ∈ R^{|V| × d_hdd}, and P^copy_{jk}, the probability of it being copied from the interaction history. |V| is the vocabulary size and d_hdd is the dimension of the hidden state. In the end, we combine P^gen_{jk} and P^copy_{jk} to yield the final prediction P_{jk}:

P^gen_{jk} = Softmax(E · (h^dec_{jk})^T)    (11)
P^copy_{jk} = Softmax(H_t · (h^dec_{jk})^T)    (12)
P_{jk} = g_{jk} × P^gen_{jk} + (1 − g_{jk}) × P^copy_{jk}    (13)
g_{jk} = Sigmoid(W_g · [h^dec_{jk}; w_{jk}; P^copy_{jk} · H_t])    (14)

Here, g_{jk} is a scalar which controls the model behaviour. It determines whether to generate values from the vocabulary list or copy words from the historical context.

5 Experiments
In this section, we first introduce the dataset and the evaluation metrics. We then describe our model's implementation details. Finally, we describe our baseline models.

5.1 Datasets and Metrics
MultiWOZ (Budzianowski et al., 2018) is a fully-labelled collection of human-human written conversations spanning multiple domains and topics. There are 7,032 multi-domain dialogues consisting of 2-5 domains in MultiWOZ. Because these dialogues involve multiple tasks, the long dialogue history makes state tracking more difficult. Since there are no dialogues from the hospital and police domains in the validation and testing sets of MultiWOZ, we follow TRADE (Wu et al., 2019) and use five out of the seven domains for training, validation and testing: restaurant, hotel, attraction, taxi and train. These domains involve 30 slots. We also test our model on a subset of MultiWOZ which only contains the dialogues from the restaurant domain, to verify whether our model still works for single-task dialogues.
We evaluate all the models using two metrics, slot accuracy and joint goal accuracy, similar to (Nouri and Hosseini-Asl, 2018):
• Slot accuracy. We use slot accuracy to check whether each single slot in the ground truth dialogue states is correct. The metric only considers whether each individual slot is predicted correctly.
• Joint goal accuracy. The joint goal accuracy evaluates whether the user's goal in each turn is captured. Only when every slot in the ground-truth dialogue state is considered and has the correct value do we consider the joint goal achieved. It is the most important metric in the dialogue state tracking task.

5.2 Implementation Details
We use the concatenation of the GloVe embedding (Pennington et al., 2014) and the character-wise embedding (Hashimoto et al., 2017) in the experiment.

Model | MultiWOZ Joint | MultiWOZ Slot | MultiWOZ(res) Joint | MultiWOZ(res) Slot
MDBT | 15.57 | 89.53 | 17.98 | 54.99
GLAD | 35.57 | 95.44 | 53.23 | 96.54
GCE | 36.27 | 98.42 | 60.93 | 95.85
SpanPtr | 30.28 | 93.85 | 49.12 | 87.89
TRADE | 48.62 | 96.92 | 65.35 | 93.28
SAS | 51.03 | 97.20 | 67.34 | 93.83
Table 1: Performance of various models on the MultiWOZ dataset and the MultiWOZ (restaurant) dataset.

Model | Joint | Slot
SAS-att-shr | 55.52 | 92.66
SAS-shr | 60.68 | 89.53
SAS(RT shr-0.7, 0.8) | 60.59 | 96.92
SAS(RT shr-0.8, 0.7) | 60.78 | 96.94
SAS(RT shr-0.8, 0.8) | 61.04 | 97.02
SAS(RT shr-0.8, 0.9) | 60.54 | 96.91
SAS(RT shr-0.9, 0.8) | 61.47 | 97.00
SAS(KM shr) | 60.92 | 96.96
SAS(HM shr) | 60.28 | 96.89
Table 2: Results evaluated on the MultiWOZ (except hotel) dataset. "RT shr" means the fix combination sharing method, "KM shr" the k-means sharing method, and "HM shr" the human evaluated sharing method. The two numbers after "-" represent the integration ratio α and the threshold β, respectively.

The model is trained with the ADAM optimizer (Kingma and Ba, 2014) and a batch size of 32. Both the encoder and the decoder use 400 hidden dimensions. The learning rate is initially set to 0.001, but once the joint goal accuracy stops rising during training, the network automatically decreases its learning rate to improve performance. We apply dropout with a 0.2 dropout rate for regularization (Srivastava et al., 2014). Besides that, a word dropout technique is also utilized in the way proposed by (Bowman et al., 2015), which simulates the out-of-vocabulary setting. Our k-means clustering algorithm is implemented with the sklearn module, and we keep all the hyperparameters of the k-means algorithm at their defaults.

5.3 Baseline Methods
We compare SAS with several previous methods: MDBT, GLAD, GCE, SpanPtr and TRADE. Based on the classical NBT model, MDBT (Ramadan et al., 2018) extended the task to multiple domains. MDBT makes full use of the semantic similarities between the dialogue and the slot ontology to track the domain and the value of the slot jointly. GLAD relies on global modules to learn general information and local modules to catch slot-specific information (Zhong et al., 2018) from the dialogues. GCE efficiently improves and simplifies GLAD, while keeping the excellent performance of GLAD (Nouri and Hosseini-Asl, 2018). SpanPtr first introduced the pointer network (Vinyals et al., 2015) into the dialogue state tracking task to extract unknown slot values (Xu and Hu, 2018), and that paper also applies an effective dropout technique for training. TRADE directly generates slot values from the dialogues by using the copy mechanism and gets rid of the predefined value list (Wu et al., 2019). It achieves the previous state-of-the-art performance.
We use the fix combination version of SAS in Table 1, with an integration ratio α of 0.8 and a threshold β of 0.8. These are the best hyperparameters we found for MultiWOZ.

6 Results
In this section, we first show the results of our model on the MultiWOZ dataset, then on MultiWOZ (restaurant) and MultiWOZ (except hotel). After conducting the ablation experiment, we also show the improvement that the slot attention and slot information sharing bring.
Our model achieves the best performance on the most important metric, joint goal accuracy. Our model outperformed the previous state-of-the-art model, TRADE, by 2.41% absolute on joint goal accuracy. We only observe a slight increase in slot accuracy compared to TRADE. We suspect this is because TRADE was already achieving nearly 97% accuracy, which is close to the upper bound of slot accuracy in this task. After carefully checking the error cases, we found these errors mainly come from the difficulty of generating name phrases.
To test SAS's ability on single-domain dialogue tasks, we also evaluate our model on a subset of MultiWOZ which contains only the restaurant search task. As displayed in Table 1, SAS achieved a 1.99% improvement over TRADE on joint goal accuracy as well, suggesting that SAS's good performance generalizes to single-domain tasks.
Table 2 also shows how different choices of the hyperparameters influence the final results. On MultiWOZ, an integration ratio of 0.8 and a threshold of 0.8 are the best hyperparameters.
But as 6372 illustrated in Table 2, the best integration ratio is no longer 0.8 on MultiWOZ (except hotel). The best values of the integration ratio and the threshold will vary with the ontology. We also perform ablation study to quantify different modules’ contribution. We observe in Table 3 that adding the slot attention improves the state tracking results by 1.37% on MultiWOZ. Such improvement suggests having slot attention that focuses on the key information of the history is useful. And the slot information sharing further enhances the performance by 1.04%. The reason behind this may be that the information sharing of the related slots makes the data-insufficient slot receive more information. This handles the rare or unknown slotvalue problems to some extent. As illustrated in Table 3, a model with the fix combination sharing method performs better than the k-means sharing method. But the fix combination method has an obvious shortcoming. It is difficult to generalize to new ontology. We need search the hyperparameters for every new ontology and these efforts are usually costly and time-consuming. Results in Table 2 and Table 3 indicate that the k-means algorithm provides a more robust model with respect to different parameters. MultiWOZ MultiWOZ(res) Model Joint Slot Joint Slot SAS-att-shr 48.62 96.92 65.35 93.28 SAS-shr 49.99 97.10 66.89 93.62 SAS(RT shr) 51.03 97.20 67.34 93.83 SAS(KM shr) 50.46 97.15 66.65 93.78 SAS(HM shr) 50.27 97.13 66.89 93.62 Table 3: Performances of the models with different components on MultiWOZ dataset and MultiWOZ (restaurant) dataset. RT shr, KM shr, HM shr indicate the model is using the fix combination sharing method, k-means sharing method, and the human evaluated sharing method in the slot information sharing respectively. To investigate whether the slot similarity matrices used by the two sharing methods really reflect the similarity among slots, we also compare them with a human constructed similarity matrix. We invite three volunteers to carefully rate (1 or 0) the relationship between every two slots and obtain the slot similarity matrix used in the human evaluated method. As shown in Table 2 and Table 3, the performance of the k-means sharing method is close to the one the human constructed method. This indicates human knowledge cannot further improve this task. Besides that, we also notice that the fix combination model usually outperforms the human constructed method, demonstrating that the fix combination model can automatically discover some hidden relationship among all slots that human cannot capture. 7 Error Analysis To better understand why our model improves the performance, we investigated some dialogue examples and shown them in Table 4. In the first dialogue, by asking “Could you also find me a hotel with a moderate price that offers internet?”, the user has briefly informed the agent that he/she is looking for a hotel “with internet”. The previous model missed the “hotel-internet” in the tracked slots. Because the model is mislead by the long interaction history. Our model learns to focus on important information using the slot attention to track the correct internet slot. In the second dialogue, although the previous model manages to capture the value “21:30”. It still confused “arriveby” with “leaveat”. While SAS can distinguish them. We suspect this is because our model can learn the differences between these slots by training on isolated key features per slot without seeing any irrelevant information. 
In the third example, the user agrees to visit an attraction named “Christ’s College” from many college-type choices the agent suggests. Previous model fetches a wrong message and fills the slot “attraction-name” with “Clare College”. In contrast, SAS captures the correct attraction name and also deduces that the attraction type is college. Similar to the first dialogue, the slot attention helps model gain more clean information to detect slot values more accurately. And by sharing the information fetched from slot “attraction-name” with the slot “attraction-type”, our model is more sensitive with the value “college”. We also investigate the limitation of our model by analyzing the state tracking errors. We noticed two types of errors. First, SAS can not effectively identify value “dontcare” for most slots. For example, when the agent asks the user about his/her requirement on the hotel rating, though he/she answers “that is not really important for me”, the model fails to fill “dontcare” into the slot “hotelstar”. We believe this is due to the fact that the meaning of “dontcare” has plenty of expressions, it is much harder for the model to learn the semantic 6373 No Model Context 1 I am looking for a train that leaves on saturday and arrives by 10:30. // Where are you traveling to and from? // · · · // Yes, that train sounds good. Please book it for me. Could you also find me a hotel with a moderate price that offers internet? // · · · // The north part of town please, preferably in a guesthouse. True ‘hotel-area-north’, ‘train-day-saturday’, ‘hotel-internet-yes’, · · · , ‘hotel-pricerange-moderate’, ‘hotel-type-guest house’ TRADE ‘train-arriveby-10:30’, ‘train-day-saturday’, ‘train-departure-birmingham new street’, ‘train-destination-cambridge’, ‘hotel-pricerange-moderate’, ‘hotel-type-guest house’ SAS ‘train-destination-cambridge’, ‘train-departure-birmingham new street’, · · · , ’hotel-internet-yes’, ’train-arriveby-10:30’ 2 I am looking for a Chinese restaurant in the centre of town. // · · · // All Saints Church is famous for its architecture. It’s located on Jesus Lane, cb58bs. They can be reached at 01223452587. Is there anything else I can find for you? // Yes. I need a taxi to take me from the church to the restaurant at 21:30. True ‘restaurant-food-chinese’, ‘attraction-area-centre’, · · · , ‘taxi-departure-all saints church’, ‘restaurant-area-centre’, ‘taxi-leaveat-21:30’ TRADE ‘restaurant-food-chinese’, ‘attraction-area-centre’, · · · , ‘taxi-arriveby-21:30’, ‘taxi-departure-all saints church’, ‘restaurant-area-centre’, ‘restaurant-pricerange-dontcare’, ‘taxi-leaveat-21:30’ SAS ‘taxi-destination-all saints church’, ‘restaurant-pricerange-dontcare’, ‘attraction-area-centre’, ‘taxi-leaveat-21:30’, ‘restaurant-food-chinese’, ‘taxi-departure-all saints church’, ‘attraction-name-all saints church’, ‘restaurant-area-centre’ 3 I would like to get some information about colleges to visit? // There is Christs College, Churchill College, Clare College , Clare Hall, Corpus Christi, Downing College, Emmanuel College, and Huges Hall. Would you like me to list more? // · · · // Tr6359 leaves at 13:40 and arrives 16:23, will this 1 work for you ? // Yes i need 6 tickets. 
True ‘attraction-type-college’, ‘train-departure-birmingham new street’, · · · , ‘attraction-name-christ s college, ‘train-book people-6’, ‘train-day-friday’, ‘train-arriveby-16:30’ TRADE ‘attraction-name-clare college’, ‘train-departure-birmingham new street’, ‘train-destination-cambridge’, ‘train-book people-6’, ’train-day-friday’, ‘train-arriveby-16:30’ SAS ‘train-destination-cambridge’, ‘train-departure-birmingham new street’, ‘attraction-name-christ s college’, ‘train-book people-6’, · · · , ‘attraction-type-college’ Table 4: Example dialogue state outputs from TRADE and SAS. “True” stands for ground truth dialogue states, “TRADE” and “SAS” are the generation results from TRADE and SAS respectively. of “dontcare” than other slots. Besides that, we also notice that the tracking errors of departure or destination location are still common. The reason may be that location name words are usually rich in variations and have few grammatical feature. 8 Conclusions and Future Work We present SAS, an effective DST model which successfully extracts the key feature from the original information excessive dialogue. The slot attention of SAS enables it to isolate the key information for each slot, while the slot information sharing enhances the expressiveness of the infor6374 mation passed to each slot by integrating the information from similar slots. The sharing allows SAS to generalize on rare slot-value pairs with few training data. Our model reaches the state-of-the-art performance compared with previous models. We believe that SAS provides promising potential extensions, such as adapting our model on other tasks where are troubled by excessive information. Besides that, we also notice that it is hard for SAS to correctly extract names of hotel or attraction which have rich variations. Designing a new model to address these problems may be our future work. Acknowledgement This research is funded by the Science and Technology Commission of Shanghai Municipality (19511120200 & 18511105502) and by Xiaoi Research. The computation is performed in ECNU Multifunctional Platform for Innovation (001). References Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2015. Generating sentences from a continuous space. arXiv preprint, arXiv:1511.06349. Version 4. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gasic. 2018. Multiwoz-a largescale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026. Danqi Chen. 2018. Neural reading comprehension and beyond. Ph.D. thesis, Stanford University. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open-domain questions. arXiv preprint, arXiv:1704.00051. Version 2. Tongfei Chen, Chetan Naik, Hua He, Pushpendre Rastogi, and Lambert Mathias. 2019. Improving long distance slot carryover in spoken dialogue systems. arXiv preprint, arXiv:1906.01149. Version 1. Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint, arXiv:1412.3555. Version 1. Franck Dernoncourt, Ji Young Lee, Trung H Bui, and Hung H Bui. 2017. Robust dialog state tracking for large ontologies. In Dialogues with Social Robots, pages 475–485. Springer. Kazuma Hashimoto, Yoshimasa Tsuruoka, Richard Socher, et al. 2017. 
A joint many-task model: Growing a neural network for multiple nlp tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1923– 1933. Matthew Henderson, Blaise Thomson, and Steve Young. 2014. Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292– 299. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint, arXiv:1412.6980. Version 9. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777–1788. Elnaz Nouri and Ehsan Hosseini-Asl. 2018. Toward scalable neural dialogue state tracking model. arXiv preprint, arXiv:1812.00899. Version 1. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Julien Perez and Fei Liu. 2017. Dialog state tracking, a machine reading approach using memory network. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 305–314. Osman Ramadan, Paweł Budzianowski, and Milica Gasic. 2018. Large-scale multi-domain belief tracking with knowledge sharing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 432–437. Abhinav Rastogi, Dilek Hakkani-T¨ur, and Larry Heck. 2017. Scalable multi-domain dialogue state tracking. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 561– 568. IEEE. Sanuj Sharma, Prafulla Kumar Choubey, and Ruihong Huang. 2019. Improving dialogue state tracking by discerning the relevant context. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 576–581. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958. 6375 Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Zhuoran Wang and Oliver Lemon. 2013. A simple and generic belief tracking mechanism for the dialog state tracking challenge: On the believability of observed information. In Proceedings of the SIGDIAL 2013 Conference, pages 423–432. TH Wen, D Vandyke, N Mrkˇs´ıc, M Gaˇs´ıc, LM RojasBarahona, PH Su, S Ultes, and S Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017-Proceedings of Conference, volume 1, pages 438–449. Jason D Williams. 2014. Web-style ranking and slu combination for dialog state tracking. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 282–291. Chien-Sheng Wu, Andrea Madotto, Ehsan HosseiniAsl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. 
Transferable multi-domain state generator for task-oriented dialogue systems. arXiv preprint, arXiv:1905.08743. Version 1. Puyang Xu and Qi Hu. 2018. An end-to-end approach for handling unknown slot values in dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1448–1457. Victor Zhong, Caiming Xiong, and Richard Socher. 2018. Global-locally self-attentive encoder for dialogue state tracking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1458– 1467. Lukas Zilka and Filip Jurcicek. 2015. Incremental lstmbased dialog state tracker. In 2015 Ieee Workshop on Automatic Speech Recognition and Understanding (Asru), pages 757–762. IEEE.
2020
567
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6376–6385 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6376 Speaker Sensitive Response Evaluation Model JinYeong Bak School of Computing KAIST [email protected] Alice Oh School of Computing KAIST [email protected] Abstract Automatic evaluation of open-domain dialogue response generation is very challenging because there are many appropriate responses for a given context. Existing evaluation models merely compare the generated response with the ground truth response and rate many of the appropriate responses as inappropriate if they deviate from the ground truth. One approach to resolve this problem is to consider the similarity of the generated response with the conversational context. In this paper, we propose an automatic evaluation model based on that idea and learn the model parameters from an unlabeled conversation corpus. Our approach considers the speakers in defining the different levels of similar context. We use a Twitter conversation corpus that contains many speakers and conversations to test our evaluation model. Experiments show that our model outperforms the other existing evaluation metrics in terms of high correlation with human annotation scores. We also show that our model trained on Twitter can be applied to movie dialogues without any additional training. We provide our code and the learned parameters so that they can be used for automatic evaluation of dialogue response generation models. 1 Introduction Evaluating the system generated responses for open-domain dialogue is a difficult task. There are many possible appropriate responses given a dialogue context, and automatic metrics such as BLEU (Papineni et al., 2002) or ROUGE (Lin, 2004) rate the responses that deviate from the ground truth as inappropriate. Still, it is important to develop and use an automatic metric because human annotation is very costly. In addition to BLEU and ROUGE, there is a widely-used evaluation metric based on the distributed word representation (Liu et al., 2016), but this metric shows low correlations with human judgments. One reason for the difficulty in developing an automatic metric that correlates well with human judgements is that the range of appropriate responses for a given context is very wide. Table 1 shows an example of a conversation between Speaker A and B. While there is a ground truth response “Yeah let’s go to the theater,” A could have also said “That sounds good! Have you seen Thor?” or “Good. What movie?” Note that based on word overlap with the ground truth, these two responses would receive low scores. Responses labeled N#, such as “The weather is no good for walking” are not appropriate. As the Table shows, the existing metrics from BLEU to RUBER are not able to tell apart these appropriate A# responses from the inapproriate N# responses. Some recent metrics such as ADEM (Lowe et al., 2017) and RUBER (Tao et al., 2018) compute the similarity between a context and a generated response. However, ADEM requires humanannotated scores to train and thus cannot be applied to new datasets and domains. RUBER overcomes this limitation by using the idea that a random response should be used as a “negative sample”, but it is not able to distinguish the responses in the example in Table 1, because it uses only one random sample which does not provide sufficient information about appropriate and inappropriate responses. 
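To make the word-overlap problem concrete, here is a small illustration (ours, not the paper's evaluation code) using NLTK's sentence-level BLEU with smoothing, applied to the responses discussed above: the appropriate paraphrase shares no words with the ground truth and scores near zero, while an off-topic response with incidental overlap can score higher.

```python
# Why word-overlap metrics mis-rank appropriate responses (cf. Table 1).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "yeah let's go to the theater".split()
appropriate = "that sounds good have you seen thor".split()   # appropriate, no overlap
off_topic = "the weather is no good for walking".split()      # off-topic, but shares "the"

smooth = SmoothingFunction().method7
print(sentence_bleu([reference], appropriate, smoothing_function=smooth))  # near zero
print(sentence_bleu([reference], off_topic, smoothing_function=smooth))    # typically higher
```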
In this paper, we propose the Speaker Sensitive Response Evaluation Model (SSREM), which analyzes the appropriateness of responses. We use speaker sensitive responses that are generated by one speaker to train the model. We test SSREM in comparison with other evaluation metrics. First, we create annotated human scores for responses in Twitter conversation data. The evaluation scores of SSREM show a higher correlation with human scores than the other evaluation metrics. SSREM also outperforms the other metrics in terms of identifying the ground truth response given a context.

Context: A: What do you want to do tonight? B: Why don't we go see a movie?
Ground truth response: A: Yeah Let's go to the theater
Utterance | BLEU | ROUGE | EMB | RUBER | SSREM | Human
A1 That sounds good! Have you seen Thor? | 0.00 (3) | 0.00 (3) | 0.95 (2) | 0.59 (2) | 0.64 (1) | 5.00 (1)
A2 Good, What movie? | 0.00 (3) | 0.00 (3) | 0.92 (4) | 0.55 (4) | 0.62 (2) | 5.00 (1)
A3 Or hang out in city | 0.00 (3) | 0.00 (3) | 0.89 (6) | 0.48 (5) | 0.49 (3) | 3.80 (3)
N1 The weather is no good for walking | 0.32 (1) | 0.15 (2) | 0.94 (3) | 0.47 (6) | 0.44 (4) | 2.60 (4)
N2 The sight is extra beautiful here | 0.32 (1) | 0.17 (1) | 0.97 (1) | 0.64 (1) | 0.38 (5) | 1.00 (5)
N3 Enjoy your concert | 0.00 (3) | 0.00 (3) | 0.91 (5) | 0.57 (3) | 0.33 (6) | 1.00 (5)
Table 1: Example of appropriate responses (A1-A3) and non-appropriate responses (N1-N3) for a given context and ground truth response, and the responses' scores by evaluation metrics. EMB is the embedding average and Human is the average score from five people. Ranks are shown in brackets. SSREM has a positive correlation with human scores.

We show an additional advantage of SSREM: it can be applied to evaluate a new corpus in a different domain. We train SSREM on a Twitter corpus and test it on a corpus of movie dialogues, and we show that SSREM outperforms the other metrics in terms of the correlation with human scores and the task of identifying the ground truth response.
Our contributions in this paper include the following.
• We present SSREM, a new response evaluation model trained with speaker sensitive negative samples (Sec 3).
• We conduct experiments on a Twitter conversation corpus and show that SSREM outperforms the others (Sec 5 and 6). We further show the applicability of SSREM on a movie dialogue corpus that is not used in training (Sec 7).
• We provide our code and the learned parameters of SSREM, which can be used for evaluation of generated responses.1
1 https://github.com/NoSyu/SSREM

2 Related Work
In this section, we describe existing automatic evaluation metrics for dialogue response generation and discuss their limitations. For task-oriented dialogue models such as the airline travel information system (Tur et al., 2010), completing the given task is most important, and the evaluation metrics reflect that (Hastie, 2012; Bordes et al., 2017). But open-domain conversation models do not have specific assigned tasks; the main goal of an open-domain conversation model is generating appropriate responses given a conversation about any topic.
Existing automatic evaluation metrics compare a generated response with the ground truth response. The most widely used metrics are BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004), which are based on the overlap of words between the two responses. A limitation of these word overlap-based metrics is that they cannot identify synonyms, and to overcome this limitation, the embedding-based metrics use distributed word vector representations (Liu et al., 2016).
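As a concrete reference point for the embedding-based metric just mentioned, here is a hedged sketch (our own, not the paper's code) of the embedding-average variant: each response is represented by the mean of its word vectors and the two representations are compared with cosine similarity. The `embeddings` dictionary (word to vector) is an assumption of the example.

```python
# Illustrative embedding-average metric; `embeddings` maps words to NumPy vectors
# (e.g., pre-trained GloVe or word2vec vectors loaded elsewhere).
import numpy as np

def average_vector(tokens, embeddings, dim=200):
    vecs = [embeddings[w] for w in tokens if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def embedding_average_score(generated, reference, embeddings):
    g = average_vector(generated.lower().split(), embeddings)
    r = average_vector(reference.lower().split(), embeddings)
    denom = np.linalg.norm(g) * np.linalg.norm(r)
    return float(g @ r / denom) if denom > 0 else 0.0
```

Because averaged vectors of fluent sentences tend to lie close together, this score is high for almost any well-formed response (see the EMB column of Table 1), which is one likely reason it correlates poorly with human judgments, as discussed next.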
However, these metrics have poor correlation with human judgments (Liu et al., 2016; Novikova et al., 2017; Gupta et al., 2019) because they still only look at the similarity between the generated response and the ground truth. SSREM is a model built with the awareness that a response can be different from the ground truth response but still appropriate for the conversation context. The responses in a casual conversation can vary widely. For example, there are four appropriate responses, including the ground truth response, for the given context in Table 1.
Some previous approaches suggest considering the context together with the response, such as ADEM (Lowe et al., 2017) and RUBER (Tao et al., 2018). ADEM uses a pre-trained VHRED (Serban et al., 2017) to encode the texts and computes the score by mixing similarities among the context, the generated response and the ground truth. One limitation of ADEM is that it requires human annotated scores to learn the model. Human labeling is cost-intensive, so it is impractical to apply to a new dataset or domain. RUBER uses negative sampling to overcome this issue, but it uses only one random negative sample against one positive sample, which is not ideal (Gutmann and Hyvärinen, 2010). SSREM does not require human scores to learn the model and uses many speaker sensitive negative samples.

[Figure 1: Example of utterance sets for speaker A. SC stands for 'same conversation', SP for 'same partner', SS for 'same speaker', and Rand for 'random'.]

Set | Mean similarity
SC | .922 ± 1e-4
SP | .919 ± 2e-4
SS | .912 ± 3e-4
Rand | .898 ± 2e-3
Table 2: Mean similarity among utterances in the SC, SP, SS and Rand sets with a 95% confidence interval.

3 Speaker Sensitive Response Evaluation Model
This section describes our Speaker Sensitive Response Evaluation Model (SSREM), which is trained with speaker sensitive utterance samples. SSREM looks at a given context and its ground truth response together to evaluate a generated response. We describe the motivation of SSREM with empirical observations in section 3.1. We present the structure of SSREM in section 3.2. Based on the motivation, we present a training method for SSREM with speaker sensitive utterance samples in section 3.3.

3.1 Motivation
We are motivated by the assumption that there are varying degrees of similarity among utterances in a corpus of conversations containing many speakers and conversations.
1. If we pick a set of random utterances from the corpus, they will not be very similar.
2. If we pick a set of utterances from a single speaker conversing with multiple partners, those utterances will be more similar than the random utterances in 1.
3. If we pick a set of utterances from conversations between a single dyad, even if the conversations are far apart in time, those utterances would be more similar than those in 2.
4. If we pick a set of utterances in a single conversation session, they are the most similar, even more so than those in 3.
To test these assumptions, we first categorize one speaker A's utterances into four types of sets corresponding to the assumptions above.
• Random (RandA): Random utterances from speakers who are not A • Same Speaker (SSA): Speaker A’s utterances • Same Partner (SPA): A’s utterances in conversations with the same partner B • Same Conversation (SCA): A’s utterances in a single conversation Figure 1 shows one example of the sets. We make three SCA sets because A participates in three conversations. We make two SPA sets because A has conversations with B and C. SSA is all utterances from A so we create one set of utterances for A. Finally, RandA is random utterances from non-A’s utterances. We create five sets for each speaker. From these sets, we compute the similarity among utterances in a set. First, we convert an utterance into a vector by averaging the words in the utterance with GloVe Twitter 200d (Pennington et al., 2014). And we compute the similarity of the vectors by Frobenius norm. Finally, we calculate the mean similarity of each set with a 95% confidence interval. Table 2 shows the results. Rand has the lowest similarity mean value, so it supports the first assumption. SS has higher similarity mean value than Rand. It supports the second assumption. The mean similarity value of SP is higher than SS. It supports the third assumption. Finally, SC has the highest mean similarity value. It also supports the last assumption. From the observations, we assume that utterances are clustered by the speakers and addressees. 3.2 SSREM SSREM evaluates a generated response ˆr from a context c and a ground truth response r. The output of SSREM is as follows: SSREM(c, r, ˆr) = h(f(c, ˆr), g(ˆr, r)) (1) 6379 where f(c, ˆr) = tanh(V (c)T MV (ˆr)) is a parametrized function to measure the similarity between the context c and the generated response ˆr. V is a function to convert a sequence of words to a vector. M is a matrix that weights of the similarity between two vectors. It is the parameter of the f function. g(r, ˆr) is another function to measure the ground-truth response and the generated one. h is a function to mix the values of f and g functions. To normalize each output of the f and g functions, we adopt linear scaling to unit range (Aksoy and Haralick, 2001) which rescale the value x as follows: ˜x = x −l u −l (2) where u is an maximum and l is minimum of x. SSREM is similar to RUBER, which computes the similarities among c, r and ˆr separately and merge it at the end. However, SSREM uses speaker sensitive samples, whereas RUBER takes one positive sample and one negative sample. 3.3 Training with Speaker Sensitive Samples SSREM has a parametrized function f that takes context c and a generated response ˆr. To train the f function, we define a classification problem to identify the ground truth response r from a set of candidate responses Rcand. The Rcand has the ground truth response and some negative samples. A classifier tries to identify the ground truth response with the negative samples. Negative samples are usually selected from the uniform distribution. But we sample the speaker sensitive utterances which described in section 3.1 for SSREM. Formally speaking, let A be the speaker of the ground truth response rA. It means it is A’s turn to say the response for the context c. The candidate response set RcandA is given by RcandA = {rA, scA, spA, ssA, randA} (3) where scA ∈SCA \ c, spA ∈SPA \ c, ssA ∈ SSA \ c and randA ∈RandA are the negative samples from speaker sensitive responses. 
Then, the probability of a ground truth response rA given context c and RcandA is as follows: p(rA∣c, RcandA) = exp(f(c, rA)) ∑r′∈RcandA exp(f(c, r′)) (4) We maximize this probability among all contextground truth response pair. So the loss function of the classification problem is −∑ c log exp(f(c, rA)) ∑r′∈RcandA exp(f(c, r′)) (5) This approach is similar to learning the sentence representations (Logeswaran and Lee, 2018), but we use the speaker sensitive negative samples. It is also similar to Noise Contrastive Estimation (NCE) (Gutmann and Hyv¨arinen, 2010; Mnih and Teh, 2012). But we set the noise distribution to speaker sensitive distribution and only take the data sample term in the objective function of the NCE. Selecting negative samples is important for learning. When we choose the noise distribution, it would be close to the data distribution, because otherwise, the classification problem might be too easy to learn the data (Gutmann and Hyv¨arinen, 2010). Mnih and Teh (2012) shows that using samples from the unigram distribution outperforms using samples from a naive uniform distribution for learning a neural probabilistic language model. Likewise, we create negative samples from the speaker sensitive utterances. scA is more similar to the rA than any other negative samples. We show the patterns by empirical observations in section 3.1 and experimental results in section 6.2. These speaker sensitive samples make the classification problem harder and lead to learning the function f better than using the naive uniform distributed random samples. To train SSREM, we need a conversation corpus that has many conversations from one speaker. We choose the Twitter conversation corpus (Bak and Oh, 2019) as it has 770K conversations with 27K Twitter users. We split the data as 80/10/10 for training/validation/test. 4 Annotating Human Scores To measure the correlation SSREM with human judgments, we first gather human judgments of responses given a conversation context. We use Amazon Mechanical Turk (MTurk) to annotate the scores of the responses. We select 300 conversations from a dataset of Twitter conversations. And we generate responses for annotation using three conversation models and the ground truth response for each conversation. • Retrieval model (Pandey et al., 2018): A 6380 Human Score 1 2 3 4 5 Twitter 211 258 342 278 71 Movie 279 267 311 217 126 Table 3: Basic statistics of human scores of the responses on Twitter conversation and Movie scripts BM25 retrieval model (Robertson et al., 2009) that uses TF-IDF vector space. • VHCR (Park et al., 2018): A variational autoencoder model that has a global variable for a conversation. • VHUCM (Bak and Oh, 2019): A variational autoencoder model that considers the speakers of a conversation. Then we ask two questions to the MTurkers. (1) How appropriate is the response overall? (2) How on-topic is the response? These questions are used in (Lowe et al., 2017). The authors show that these questions have high inter-annotator agreement among workers. They suggest using the first question to annotate the human score, and so we follow the suggestion. But we ask the second question to workers to filter out workers who submit random answers. Each worker answers these questions on a five-point Likert scale. We annotate 1,200 responses in total. One worker answers ten conversations, four responses per conversation for a total of 40 responses. 
Each response is tagged by five workers for a total of 287 workers of which we retain the responses from 150 workers who passed all the tests. We tag the most selected score as the human score for each response. The inter-annotator Fleiss’ kappa (Fleiss, 1971) is κ = 0.61 which is consistent with the results in (Lowe et al., 2017). Table 3 shows the basic statistics of the annotations. 5 Experiment 1 - Comparing with Human Scores This section describes the experiment that looks at the correlation between the model scores and the human scores for given contexts and responses. 5.1 Experiment Setup We use a Twitter conversation corpus (Bak and Oh, 2019) to train and validate SSREM and other baseline models. For the test, we remove the ground truth responses in human-annotated corpus since it always produces the maximum score on BLEU and ROUGE. We compare SSREM with the following response evaluation methods: • BLEU (Papineni et al., 2002): We compute the sentence-level BLEU score with the smoothing seven technique (Chen and Cherry, 2014). • ROUGE (Lin, 2004): We compute the F score of ROUGE-L. • EMB (Liu et al., 2016): We compute the average cosine similarity between ground truth response and test response in a word embedding2. We use pre-trained Google news word embedding (Mikolov et al., 2013) to avoid the dependency between the training data and embedding. • RUBER (Tao et al., 2018): We train with a random negative sample to train unreferenced metric in RUBER. And we use arithmetic averaging to hybrid the referenced and unreferenced metrics. • RSREM: We use the same structure of SSREM, but train with uniformly random negative samples, not speaker sensitive samples. We choose functions in SSREM for the experiment. For V function, We use the word averaging technique that averages the vectors of words in the sequence. We can use advanced methods such as RNN or sentence embeddings (Reimers and Gurevych, 2019). But for the fair comparisons with RUBER, we select a similar approach. We use GloVe Twitter 200d word embedding (Pennington et al., 2014). For g function, we use sentence mover‘s similarity that is the state of the art evaluating reference-candidate pair of sentences by using word and sentence embeddings (Clark et al., 2019). To avoid dependency between the training data and embedding, we use Elmo embedding (Peters et al., 2018). For h function, we use arithmetic averaging that shows good results in (Tao et al., 2018). 5.2 Results and Discussion Table 4 shows the Spearman and Pearson correlations between human scores and models scores. First, BLEU, ROUGE, and EMB are not correlated 2We experimented with the greedy and extreme embedding for comparison, but these methods were not better than the average embedding. 6381 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 (a) BLEU, coeff: 0.005 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 (b) ROUGE, coeff: 0.005 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 (c) EMB, coeff: 0.003 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 (d) RUBER, coeff: 0.006 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 (e) RSREM, coeff: 0.013 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 (f) SSREM, coeff: 0.058 Figure 2: Scatter plots that show model scores against human scores. We add Gaussian noise drawn from N(0, 0.3) to the human scores to better visualize the density of points (Lowe et al., 2017). The red line is a linear regression line, and the coeff is the coefficient of the line. SSREM shows a higher positive correlation with human judgment than the other models. 
Metric Spearman Pearson BLEU 0.024 (0.472) 0.041 (0.227) ROUGE 0.024 (0.471) 0.052 (0.124) EMB 0.006 (0.861) 0.012 (0.720) RUBER 0.044 (0.192) 0.046 (0.177) RSREM 0.088 (< 0.01) 0.101 (< 0.01) SSREM 0.392 (< 0.001) 0.376 (< 0.001) Table 4: Correlation between human and model scores. We compute Spearman and Pearson correlation coefficients. p-values are shown in brackets. SSREM shows higher correlation with human judgement than the other models. with human scores. It means evaluating responses with ground truth only is not useful. These results are the same in previous research (Liu et al., 2016; Lowe et al., 2017; Tao et al., 2018). RUBER shows a higher correlation with human scores than other baselines but has a high p-value that means low statistically significant. RSREM performs better than RUBER and other baselines. It shows using multiple negative samples improves the performance of learning the model. Finally, SSREM outperforms all other methods for two correlations with low pvalues. It shows the effectiveness of using speaker sensitive negative samples. Figure 2 shows scatterplots of the human and model scores. A dot is one response, and a red line is a linear regression line. The x-axis is the human score, and the y-axis is each automatic evaluation metric. To visualize the dots better, we adopt the technique from (Lowe et al., 2017) that adds random number (N(0, 0.3)) to x-axis value. But, we train the linear regression with original scores. First, BLEU and ROUGE have many zero values since there are few overlapped words between the generated response and the ground-truth response. The dots in EMB that uses word embedding to overcome the limitation are more distributed. But there are few relationships with human scores, and the linear regression coefficient is flattened. RUBER is better than BLEU, ROUGE, and EMB. RSREM that uses more negative samples shows better than RUBER. Finally, SSREM shows a higher positive correlation with human scores than other baselines. 6 Experiment 2 - Identifying True and False Responses The second experiment presents the performance of f function in SSREM by comparing it with baselines. RUBER, RSREM, and SSREM compute the score from the context of the conversation and generated responses. To investigate the performance of the score, we set up the task that identifies the true and false responses for a given context. The 6382 RUBER RSREM SSREM 0.40 0.45 0.50 0.55 0.60 0.65 GT SC SP SS Rand Figure 3: Difference of scores on various responses in Twitter conversation corpus. The range of the vertical error bar is a 95% confidence interval of the values among the responses. SSREM outperforms the other models for identifying true and false responses. true responses are ground-truth responses, and false ones are four negative samples that are described in section 3.3. 6.1 Experiment Setup The data for this experiment is the test data of the Twitter conversation corpus. We extract contexts, true and false responses from the data. The true response is the ground-truth response (GT). And the false responses are four types that are described in section 3.3 (SC, SP, SS, Rand). We compare SSREM with RUBER and RSREM that compute the similarity between a context and a response. We take the unreferenced metric score in RUBER. And we take the output of the f function in RSREM and SSREM. We use the same trained models in section 5. 6.2 Results and Discussion Figure 3 shows the results. 
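Before walking through Figure 3, a small sketch of how the per-response-type means and 95% confidence intervals plotted there can be computed. The grouping dictionary and its values are hypothetical, and a normal-approximation interval is assumed, since the paper does not state the exact procedure.

```python
import numpy as np

def mean_and_ci95(scores):
    """Mean and normal-approximation 95% confidence interval."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean()
    half = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
    return mean, (mean - half, mean + half)

# Hypothetical f-function outputs grouped by response type.
scores_by_type = {
    "GT":   [0.64, 0.61, 0.66],
    "SC":   [0.58, 0.55, 0.57],
    "SP":   [0.52, 0.50, 0.49],
    "SS":   [0.47, 0.46, 0.44],
    "Rand": [0.41, 0.40, 0.43],
}
for name, scores in scores_by_type.items():
    print(name, mean_and_ci95(scores))
```

The captions of Figures 3 and 5 state that the error bars are 95% confidence intervals over the responses.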
The x-axis is the models, and the y-axis is the output of the unreferenced metric or f function. All models perform well on distinguishing between GT utterances and Rand utterances. But RUBER performs poor on identifying SC, SP, and SS. And RSREM cannot identify false responses from SC. Finally, SSREM outperforms the other two models for identifying all cases. It also maximizes the difference between GT and Rand than the other two models. It is another clue for showing the effectiveness of using speaker sensitive negative samples. One interesting result is that the output scores decrease from GT to Rand. It is the same observation about the differences of speaker sensitive utterances in section 3.1. And it also means that identifying GT and SC is a harder problem than GT and Rand pair. It is another evidence for why we use speaker sensitive negative samples, as we discussed in section 3.3. SC consists of negative samples that are most difficult for the model to distinguish, so it makes sense to consider only SC negative samples. But we include SP and SS for the following two reasons. First, there are only a limited number of SC utterances because they must all come from the same conversation, whereas we need a pretty large number of negative samples to effectively train the model (Mnih and Teh, 2012). Second, we also sample from SP and SS because they represent different degree of similarity to the context utterances. SC utterances are from the same conversation, leading to decreased model generalization. 7 Experiment 3 - Applying New Corpus In this section, we investigate the applicability of SSREM to a new conversation corpus. SSREM takes the speaker sensitive samples from Twitter. But there are many open-domain conversation corpora such as Movie scripts (Danescu-NiculescuMizil and Lee, 2011). Tao et al. (2018) run a similar experiment with RUBER, but they use the similar domain of data, Chinese online forum (Training from Douban and testing on Baidu Tieba). We choose the Movie scripts corpus because it is written by the script writers whereas Twitter is personal causal online conversations. We present the performance of SSREM on the new corpus. 7.1 Experiment Setup First, we annotate 1,200 responses to the movie dialog corpus. We use HRED (Sordoni et al., 2015) rather than VHUCM. The next procedure of annotation is the same when we create human scores for Twitter conversation responses in section 4. Two hundred forty-four workers tagged all responses. But, 94 workers failed the attention check question, so we collect the 150 workers’ answers. The interannotator Fleiss’ kappa (Fleiss, 1971) for Movie is κ = 0.63. It is still consistent with the results in (Lowe et al., 2017) and annotated Twitter conversations. The bottom row in Table 3 shows the basic statistics of the annotated responses. We run two experiments, comparing with human scores and identifying true and false responses. We use the same models in section 5. We use the Twitter conversation corpus to train RUBER, RSREM, and SSREM. And we test the models on annotated movie dialogs. Unlike the Twitter conversation cor6383 Metric Spearman Pearson BLEU 0.036 (0.378) 0.063 (0.124) ROUGE 0.041 (0.322) 0.054 (0.191) EMB 0.022 (0.586) 0.010 (0.815) RUBER 0.004 (0.920) -0.009 (0.817) RSREM 0.009 (0.817) 0.024 (0.550) SSSREM 0.132 (< 0.001) 0.119 (< 0.005) Table 5: Correlation between human and model scores with Movie corpus. We compute Spearman and Pearson correlation coefficient. p-values are shown in brackets. 
SSREM shows higher correlation with human judgement than the other models. pus, the movie dialogs have a short length of conversations. So we choose SC and Ran only to run the second experiment. 7.2 Results and Discussion In the experiment on comparing with human scores on the movie dialogs corpus, Table 5 shows the results. First, BLEU, ROUGE, and EMB are not correlated with human scores. RUBER shows worse performance than testing on the Twitter corpus. RSREM performs better than RUBER and other baselines, but it also shows worse performance than testing on the Twitter corpus. Finally, SSREM outperforms all other methods for two correlations with low p-values. It shows the effectiveness of using speaker sensitive negative samples for the new corpus. Figure 2 shows the similar results by plotting scatter plots. In the experiment on identifying true and false responses with the movie dialogs corpus, Figure 5 shows the results of the identification task. RUBER performs poor on distinguishing between GT and Rand statistically significantly. RSREM performs better than RUBER. And SSREM outperforms the other two models for identifying all cases in the new corpus. 8 Conclusion and Future Work In this paper, we presented SSREM, an automatic evaluation model for conversational response generation. SSREM looks at the context of the conversation and the ground-truth response together. We proposed negative sampling with speaker sensitive samples to train SSREM. We showed that SSREM outperforms the other metrics including RSREM that uses random negative samples only. We also showed that SSREM is effective in evaluating a movie conversation corpus even when it is trained with Twitter conversations. There are several future directions to improve SSREM. First, we can make SSREM more robust on adversarial attacks. Sai et al. (2019) shows limitations of ADEM on adversarial attacks such as removing stopwords and replacing words with synonyms. We investigated another type of the adversarial attack named copy mechanism that copies one of the utterances in the context as the generated response. All existing automatic evaluation methods including RUBER that compare the context and the response can be cheated by the copy mechanism. SSREM is also susceptible. However, SSREM is fooled less than other existing models because SSREM learns with negative samples from the set of utterances in the same conversation. SSREM learns to differentiate among utterances in the same context. We show this empirically with an experiment to identify true and false responses (Sec 6.2). When we look at the mean score for the context utterances that shows this copy mechanism compared to the mean score of the ground-truth response (GT), the mean score of context utterances is 0.07 higher by RUBER, but only 0.01 higher by SSREM. SSREM does not give lower scores for the context utterances than GT, but it is not as bad as RUBER. We will make SSREM more robust on the attacks. Second, we can improve SSREM for a higher correlation with human judgement. We chose to approach SSREM with a classification loss because it is simple and widely used to estimate the models using negative sampling. Although the classification loss is simple, SSREM outperforms all existing automatic evaluation models. However, as Table 2 and Figure 3 are shown, each negative samples has different correlation with the context. We will use ranking loss (Wang et al., 2014; Schroff et al., 2015) to learn the difference among samples. Recently, Zhang et al. 
(2020) uses BERT (Devlin et al., 2019) to evaluate generated candidate sentences by comparing reference sentence. We used word embeddings to represent an utterance to the vector for the simplicity, but contextual embeddings are much better since it generates more context-related representation than word embeddings. We will use the contextual embedding to represent utterances. Third, we can extend using SSREM to various conversation corpora such as task-oriented dialogues. We trained and tested SSREM on open6384 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 (a) BLEU, coeff: 0.007 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 (b) ROUGE, coeff: 0.005 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 (c) EMB, coeff: 0.002 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 (d) RUBER, coeff: −0.001 1 2 3 4 5 Human 0.0 0.2 0.4 0.6 0.8 1.0 RSREM RSREM scores against Human scores (e) RSREM, coeff: 0.008 1 2 3 4 5 Human 0.0 0.2 0.4 0.6 0.8 1.0 SSREM SSREM scores against Human scores (f) SSREM, coeff: 0.047 Figure 4: Scatter plot showing model against human scores with Movie corpus. We add Gaussian noise drawn from N(0, 0.3) to the human scores to better visualize the density of points which is similar to (Lowe et al., 2017). RUBER RSREM SSREM 0.40 0.45 0.50 0.55 0.60 0.65 0.70 GT SC Rand Figure 5: Difference of scores on various responses in Movie corpus. The range of the vertical error bar is a 95% confidence interval of the values among the responses. SSREM outperforms the other models for identifying true and false responses. domain conversation corpora. However, contextual coherence between the input context and the generated text is important in multi-turn conversations. We will apply SSREM to various conversation tasks for evaluating the generated text automatically. We will explore these directions in our future work. Acknowledgments We would like to thank Jeongmin Byun3 for building the annotation webpage, and the anonymous reviewers for helpful questions and comments. This work was supported by Institute for Information & 3https://jmbyun.github.io communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2017-0-01779, A machine learning and statistical inference framework for explainable artificial intelligence). References Selim Aksoy and Robert M Haralick. 2001. Feature normalization and likelihood-based similarity measures for image retrieval. Pattern recognition letters. JinYeong Bak and Alice Oh. 2019. Variational hierarchical user-based conversation model. In Proceedings of the EMNLP-IJCNLP. Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. In Proceedings of the ICLR. Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentencelevel bleu. In Proceedings of the Ninth Workshop on Statistical Machine Translation. Elizabeth Clark, Asli Celikyilmaz, and Noah A. Smith. 2019. Sentence mover’s similarity: Automatic evaluation for multi-sentence texts. In Proceedings of the ACL. Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics. 6385 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the NAACL. Joseph L Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological bulletin. 
Prakhar Gupta, Shikib Mehri, Tiancheng Zhao, Amy Pavel, Maxine Eskenazi, and Jeffrey Bigham. 2019. Investigating evaluation of open-domain dialogue systems with human generated multiple references. In Proceedings of the SIGdial. Michael Gutmann and Aapo Hyv¨arinen. 2010. Noisecontrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the AISTATS. Helen Hastie. 2012. Metrics and evaluation of spoken dialogue systems. In Data-Driven Methods for Adaptive Spoken Dialogue Systems. Springer. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the EMNLP. Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. In Proceedings of the ICLR. Ryan Lowe, Michael Noseworthy, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic Turing test: Learning to evaluate dialogue responses. In Proceedings of the ACL. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the NIPS. Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the ICML. Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for NLG. In Proceedings of the EMNLP. Gaurav Pandey, Danish Contractor, Vineet Kumar, and Sachindra Joshi. 2018. Exemplar encoder-decoder for neural conversation generation. In Proceedings of the ACL. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the ACL. Yookoon Park, Jaemin Cho, and Gunhee Kim. 2018. A hierarchical latent structure for variational conversation modeling. In Proceedings of the NAACL. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the EMNLP. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the NAACL. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the EMNLP-IJCNLP. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval. Ananya B. Sai, Mithun Das Gupta, Mitesh M. Khapra, and Mukundhan Srinivasan. 2019. Re-evaluating adem: A deeper look at scoring dialogue responses. F. Schroff, D. Kalenichenko, and J. Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the CVPR. Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Proceedings of the AAAI. Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and JianYun Nie. 2015. 
A hierarchical recurrent encoderdecoder for generative context-aware query suggestion. In Proceedings of the CIKM. Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. Ruber: An unsupervised method for automatic evaluation of open-domain dialog systems. In Proceedings of the AAAI. Gokhan Tur, Dilek Hakkani-T¨ur, and Larry Heck. 2010. What is left to be understood in atis? In Proceedings of the IEEE Spoken Language Technology Workshop. J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu. 2014. Learning finegrained image similarity with deep ranking. In Proceedings of the CVPR. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In Proceedings of the ICLR.
2020
568
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6386–6395 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6386 A Top-Down Neural Architecture towards Text-Level Parsing of Discourse Rhetorical Structure Longyin Zhang1,2, Yuqing Xing1,2, Fang Kong1,2⇤, Peifeng Li1,2, Guodong Zhou1,2 1. Institute of Artificial Intelligence, Soochow University, China 2. School of Computer Science and Technology, Soochow University, China {lyzhang9,yqxing}@stu.suda.edu.cn {kongfang,pfli,gdzhou}@suda.edu.cn Abstract Due to its great importance in deep natural language understanding and various down-stream applications, text-level parsing of discourse rhetorical structure (DRS) has been drawing more and more attention in recent years. However, all the previous studies on text-level discourse parsing adopt bottom-up approaches, which much limit the DRS determination on local information and fail to well benefit from global information of the overall discourse. In this paper, we justify from both computational and perceptive points-of-view that the top-down architecture is more suitable for textlevel DRS parsing. On the basis, we propose a top-down neural architecture toward text-level DRS parsing. In particular, we cast discourse parsing as a recursive split point ranking task, where a split point is classified to different levels according to its rank and the elementary discourse units (EDUs) associated with it are arranged accordingly. In this way, we can determine the complete DRS as a hierarchical tree structure via an encoder-decoder with an internal stack. Experimentation on both the English RST-DT corpus and the Chinese CDTB corpus shows the great effectiveness of our proposed top-down approach towards textlevel DRS parsing. 1 Introduction Text-level parsing of discourse rhetorical structure (DRS) aims to identify the overall discourse structure and the rhetorical relations between discourse units in a text. As a fundamental research topic in natural language processing, text-level DRS parsing plays an important role in text understanding and can benefit various down-stream applications, such as document summarization (Goyal and Eisenstein, 2016), sentiment analysis (Choi et al., 2016), text categorization (Ji and Smith, 2017), pronoun ⇤Corresponding author Commentary (SN) Overall-branch (SN) Coordinating (NN) e7 e6 Coordinating (NS) e5 e4 Purpose (SN) Coordinating (NN) e3 e2 e1 e1: 西œˆLËËÔÅ⇤t·7”Ñ,/ Bank of Tibetan actively readjusts credit structure e2: Ân›úg⇢生ßIÕπß⇢Ñïe,/ Ensuring the investment of key industries such as husbandry production e3: †'˘Â⇢、˝ê、§⇢、⇢·I˙æÑc8 D—õîœ⇥/ Increase the normal supply of funds for industrial, energy, transportation, communications e4: ªt∞û7>A€π€一øC,/ Last year, the newly increased loan was 1.441 billion yuan e5: ‘⌦tû†kø⇢C⇥/ an increase of more than 800 million yuan compared to the previous year. e6: úg⇢生ß7>(⇧Ïv+7> ‘⌦t∞û€ π køC;/ The loans (including aid the poor loan) for agricultural and livestock production newly increased by 438 million yuan compared to the previous year e7: aG企⇢7>ûE:~⌃KmA一πk ⇥/ The increase in loans to township enterprises was 61.83% Figure 1: An example for DRS parsing, where the text consists of 3 sentences containing 7 EDUs. resolution (Sheng et al., 2017) and event temporal relation identification (Dai et al., 2019). According to Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), a text can be presented by a hierarchical tree structure known as a Discourse Tree(DT). 
Figure 1 illustrates an excerpt with its gold standard DRS from article chtb 0005 in the Chinese CDTB (Connectivedriven Discourse Treebank) corpus (Li et al., 2014c). We can find that, in the DT, each leaf node corresponds to an elementary discourse unit (EDU), and various EDUs are recursively 6387 combined into high level larger discourse units in a bottom-up fashion. In this example, 7 EDUs are connected by 6 rhetorical relations, while in each non-terminal node, the rhetorical relation and the nuclearity type are labeled. Correspondingly, text-level DRS parsing consists of three components, i.e., bare DRS generation (hierarchical span determination), rhetorical nuclearity determination and rhetorical relation classification. During the past decade, text-level DRS parsing has been drawing more and more attention and achieved certain success (Hernault et al., 2010; Joty et al., 2013; Feng and Hirst, 2014; Ji and Eisenstein, 2014; Heilman and Sagae, 2015; Li et al., 2016; Braud et al., 2017; Yu et al., 2018). However, all the previous studies on text-level DRS parsing adopt bottom-up approaches. That is, adjacent EDUs are recursively combined into high-level larger text spans by rhetorical relations to form a final discourse tree in a bottom-up way. In this paper, we justify that compared with a bottom-up approach, a top-down approach may be more suitable for textlevel DRS parsing from two points-of-view, • From the computational view, only local information (i.e., the constructed DRS subtrees and their context) can be naturally employed to determine the upper layer structure in the bottom-up fashion. Due to the overwhelming ambiguities at the discourse level, global information, such as the macro topic or structure of the discourse, should be well exploited to restrict the final DRS, so as to play its important role. From the computational view, a top-down approach can make better use of global information. • From the perceptive view, when people read an article or prepare a manuscript, they normally go from coarse to fine, from general to specific. That is, people tend to first have a general sense of the theme of the article, and then go deep to understand the details. Normally, the organization of the article is much limited by its theme. For text-level DRS parsing, a top-down approach can better grasp the overall DRS of a text and conform to the human perception process. Additionally, just noted as Li et al. (2014c), they employed a top-down strategy in the Chinese CDTB annotation practice. That is, a top-down approach is consistent with the annotation practice of a DRS corpus. In this paper, we propose a topdown neural architecture to text-level DRS parsing. In particular, we cast top-down text-level DRS parsing as a recursive split point ranking task, where various EDUs associated with split points are arranged in different levels according to the rank of the split point. In this way, we can determine the complete DRS as a hierarchical tree structure via an encoder-decoder with an internal stack. It is worthwhile to mention that, at each time step, we use the Biaffine Attention mechanism (Dozat and Manning, 2017) to compute the attention vector and determine the next split point, along with the corresponding nuclearity and relation jointly. 
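To make the split-point formulation concrete before the details in Section 3, the following is a schematic sketch of the top-down construction loop. Here score_split merely stands in for the learned biaffine scorer, nuclearity and relation prediction are omitted, and all names are our own rather than the authors' code.

```python
def build_bare_tree(num_edus, score_split):
    """Skeleton of the top-down parse: recursively rank split points.

    Split point k lies between the kth and (k+1)th EDUs; points 0 and
    num_edus are the boundary stubs.  score_split(l, r, k) is a placeholder
    for the model's scorer over the candidate points of span (l, r).
    """
    stack = [(0, num_edus)]      # boundary split-point indices of the span
    decisions = []               # (l, k, r): span (l, r) is split at k
    while stack:
        l, r = stack.pop()
        if r - l <= 1:           # span covers a single EDU: it is a leaf
            continue
        k = max(range(l + 1, r), key=lambda m: score_split(l, r, m))
        decisions.append((l, k, r))
        stack.append((k, r))     # push right sub-span first so the left
        stack.append((l, k))     # sub-span is popped (and split) next
    return decisions

# Toy usage with a dummy scorer over the 7-EDU example of Figure 1.
decisions = build_bare_tree(7, lambda l, r, k: -abs(k - (l + r) / 2))
```

With a trained scorer, each span is split at its highest-scoring point and the two sub-spans are pushed back onto the stack, so the tree is built with one decoding step per internal node.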
2 Related Work In the literature, previous studies on text-level discourse parsing can be classified into two categories, probabilistic CKY-like approaches (Hernault et al., 2010; Joty et al., 2013; Feng and Hirst, 2014; Li et al., 2014a, 2016) and transition-based approaches (Li et al., 2014b; Ji and Eisenstein, 2014; Heilman and Sagae, 2015; Wang et al., 2017; Braud et al., 2017; Yu et al., 2018). Probabilistic CKY-like approaches normally exploit various kinds of lexical, syntactic and semantic features to compute the probability of the relation between the EDUs, and select the two EDUs with the highest relational probability to merge into one text span. In this way, the final discourse tree is generated. Recently, various deep learning models are employed to capture hidden information to compute the relational probability, e.g. recursive deep models (Li et al., 2014a), and attention-based hierarchical neural network models (Li et al., 2016). As an alternative, transition-based approaches employ the dependency structure to directly represent the relations between EDUs. Li et al. (2014b) first build a discourse dependency treebank by converting the RST-DT corpus and then apply graph based dependency parsing techniques to discourse parsing. Ji et al. (2014) propose a shift-reduce discourse parser using a representation learning approach to achieve the state-of-the-art performance. Wang et al. (2017) propose a pipelined two-stage parsing approach. First, a transition-based model is employed to parse a bare discourse tree. Then, an independent relation labeller is adopted to determine discourse relations. Braud et al. (2017) present two variants of transition-based discourse parsing using a feedforward neural network model. Yu et al. (2018) build a transition based RST parser with implicit syntactic features. In particular, the information of 6388 sentence boundaries and paragraph boundaries is embedded as additional features. It is worthwhile to emphasize that, all the above studies on text-level discourse parsing employ the bottom-up approaches. So far, only Lin et al. (2019) and Liu et al. (2019) make the preliminary explorations on constructing sentence-level DTs in a top-down fashion. Lin et al. (2019) proposed a framework for both the EDU segmenter and the sentence-level discourse parser uniformly. Following the work of Lin et al. (2019), Liu et al. (2019) proposed hierarchical pointer network for better dependency and sentence-level discourse parsing. However, both studies consider merely sentencelevel discourse parsing. While it is simple but effective to encode entire sentence sequentially, entire text-level discourse larger than sentence, such as paragraph and document, is obviously much more complicated. Statistics on the RST-DT corpus show each sentence only contains 2.5 EDUs on average while each document contains 55.6 EDUs on average. The representation for large text span can impact the parsing performance very much. In this paper, we present a top-down neural architecture to text-level discourse rhetorical structure parsing. Different from Lin et al. (2019) and Liu et al. (2019), we propose a hierarchical discourse encoder to better present the text span using both EDUs and split points. Benefiting from effective representation for large text spans, our text-level discourse parser achieves competitive or even better results than those best reported discourse parsers either neural or non-neural with hand-engineered features. 
3 Top-down Neural Architecture Our top-down neural architecture consists of three parts, i.e., EDU Encoder, Split Point Encoder and Attention-based Encoder-Decoder. Among them, the EDU encoder and the split point encoder are responsible for representing the EDUs and the split points, respectively. Different from Lin et al. (2019) and Liu et al. (2019), we combine the representation of both EDUs and split points hierarchically to better represent the text span rather than only using the representation of the last EDU as the representation of the text span. In this way, the global information can be exploited for our text-level discourse parsing. In the following, we take Figure 1 as the example to illustrate the architecture. 西藏 Tibetan 银行 Bank ... , NR NN ... PU Word Embeddings POS Embeddings + ࢎࢋ૚ Figure 2: Architecture of the EDU encoder. 3.1 EDU Encoder Figure 2 shows the procedure of the EDU Encoder. For a given discourse D = {E1, . . . , EN}, where N means the number of EDUs, Ek is the kth EDU. The EDU encoder is responsible for encoding each EDU. For 8Ek 2 D, Ek = {w1, w2, . . . , wn}, where wi means the ith word of Ek and n is the number of words, we first concatenate the word embedding and the POS embedding for each word. Then, the combined vectors are fed into the bi-directional GRU network (Cho et al., 2014). The output of the ith word is hi, and the last states of BiGRU in both directions are denoted as h~s and h ~ s (i.e., h~s = h~n, h ~ s = h ~ 1). Considering the different importance of each word in a given EDU, we employ a self-attention mechanism to calculate the weight of each word. Eq 1 shows the weight calculation formula, where we take the dot product of a learnable vector q and hi as the weight of the ith word in the EDU. wi = qT hi P qT hj (1) In this way, we can achieve the encoding hek of the kth EDU in given discourse D. hek = h~s h ~ s # + X wihi (2) 3.2 Split Point Encoder In this paper, we call the split position between any two EDUs the split point. A discourse containing n EDUs has n −1 split points. For example, Figure 1 contains 7 EDUs and 6 split points. The split point encoder is responsible for encoding each split point. In our model, we use the both EDUs on the left and right sides of the split point to compute the split point representation. 6389 ࢎ′ࢋ૚ ࢎ′ࢋ૛ ࢎ′ࢋૠ ݖ݁ݎ݋ ݌ܽ݀݀݅݊݃ ࢎࢋ૚ + + + … … … … … ࢎࢋ૛ ࢎࢋૠ ࢎ࢙૙ ࢎ࢙૚ ࢎ࢙ૠ ݖ݁ݎ݋ ݌ܽ݀݀݅݊݃ Figure 3: Architecture of the split point encoder. After encoding each EDU using the EDU encoder, we can get the sequence of encoded EDUs he = {he1, . . . , heN}, which are further fed into a bi-directional GRU network to get the final sequence of encoded EDUs h0 e = {h0 e1, . . . , h0 eN}. For the convenience of calculation, we first add two additional zero vectors on the start and end of the EDU sequence as stubs. Then, we use a convolutional network to compute the final split point representation. Here, the width of the convolution kernel is set to 2, and the Rectified Linear Unit (ReLU) activation function is employed to map the input h0 e = {h0 e0, h0 e1, . . . , h0 eN, h0 e(N+1)} to the output hs = {hs0, hs1, . . . , hsN}. Figure 3 takes the example as shown in Figure 1 to demonstrate the working procedure of the split point encoder. The input is the achieved 7 EDU encoding results during the EDU encoder stage, i.e., the vector sequence {he1 . . . he7}. The output is the 8 split point representation vectors {hs0 . . . 
hs7}, where the first and last vectors are just stubs and the remaining 6 vectors are meaningful outputs for the following stages.
3.3 Attention-based Encoder-Decoder on Split Point Ranking
After achieving the representation of each split point, an encoder-decoder with an internal stack is employed to rank the split points and indirectly obtain the predicted discourse parse tree. Figure 4 shows the complete encoder-decoder framework, where the left part shows the encoder. Here, the achieved split point representation vectors hs = {hs0, hs1, . . . , hsN} are fed into a bi-directional GRU network to get the output hse = {hse0, hse1, . . . , hseN}. At the same time, the combination of the last states of the bi-directional GRU network in both directions is taken as the initial state of the decoder.
Figure 4: A parsing example of the attention-based encoder-decoder.
During the decoder stage, a uni-directional GRU network with an internal stack is employed for our discourse parser. Initially, the stack contains only one element, i.e., the index pair of the first and the last split points of the complete discourse (0, N). At each decoding step, the index pair of the boundary split points is first popped from the top of the stack. Suppose the index pair is (l, r) at the jth step. Then, the encoding outputs hsel and hser are concatenated to form the input of the decoder, while the decoder output at the jth step is denoted by hdj.
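Stepping back to the split point encoder of Section 3.2, a minimal sketch of the zero-padding and width-2 convolution described above is given below; the dimensions, module layout and names are our own assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SplitPointEncoder(nn.Module):
    """Sketch of Section 3.2: a BiGRU over the encoded EDUs, zero stubs at
    both ends, then a width-2 convolution with ReLU that turns N EDU
    vectors into N+1 split-point vectors."""

    def __init__(self, edu_dim: int, hidden: int, out_dim: int):
        super().__init__()
        self.bigru = nn.GRU(edu_dim, hidden, bidirectional=True, batch_first=True)
        self.conv = nn.Conv1d(2 * hidden, out_dim, kernel_size=2)

    def forward(self, edu_vecs: torch.Tensor) -> torch.Tensor:
        # edu_vecs: (batch, N, edu_dim) from the EDU encoder.
        h, _ = self.bigru(edu_vecs)                       # (batch, N, 2*hidden)
        pad = h.new_zeros(h.size(0), 1, h.size(2))
        h = torch.cat([pad, h, pad], dim=1)               # add zero stubs
        h = h.transpose(1, 2)                             # (batch, C, N+2) for Conv1d
        return torch.relu(self.conv(h)).transpose(1, 2)   # (batch, N+1, out_dim)

# A 7-EDU discourse as in Figure 1: 7 EDU vectors -> 8 split-point vectors.
enc = SplitPointEncoder(edu_dim=512, hidden=256, out_dim=512)
splits = enc(torch.randn(1, 7, 512))   # shape (1, 8, 512)
```

With 7 EDUs, as in Figure 1, this yields the 8 vectors hs0 . . . hs7, of which the 6 interior ones correspond to real split points.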
After that, we apply the Biaffine Attention mechanism to the encoder outputs corresponding to the split points between the boundary split points (i.e., hsem, for all m with l < m < r) and the decoder output hdj. Finally, the split point with the largest score is selected as the result of this step. If there are still unselected split points in the new text spans formed by this decision, they are pushed onto the stack for the following steps.
Figure 4 shows the parsing steps of the example shown in Figure 1. Here, the arrows in red indicate the selected split points at each time step. hse0 and hse7 represent the start and end points of the given discourse, and do not participate in the split point selection during decoding. In particular, the stack is first initialized with only one element, (0, 7). That is, all EDUs form a complete text span at the very beginning, and we feed the concatenated vector [he0; he7] into the decoder to obtain the output hd1. Then, the weight is computed using hd1 and the encoder outputs corresponding to the 6 split points between point 0 and point 7, i.e., hse1 . . . hse6. In this example, since split point 3 has the largest weight, the text span is split into two parts, i.e., (0, 3) and (3, 7). Because there are still unselected split points in the text spans (0, 3) and (3, 7), we push them onto the stack. In this way, we get one split point at each step. After six iterations, the complete discourse rhetorical tree is built.
3.4 Biaffine Attention on Text-level DRS Parsing
After achieving the split point representation, we adopt the Biaffine Attention mechanism to determine the split point, nuclearity and discourse relation jointly. Since applying smaller multi-layer perceptrons (MLPs) to the recurrent output states before the biaffine classifier has the advantage of stripping away information not relevant to the current decision, we first apply a one-layer perceptron with ReLU activation to the output vectors of the encoder hsei and the decoder hdj. The converted vectors are denoted by h'sei and h'dj. Then, we compute the biaffine attention score function:

s_{i,j} = h'_{se_i}^{T} W h'_{d_j} + U h'_{se_i} + V h'_{d_j} + b, with W \in R^{m \times k \times n}, U \in R^{k \times m}, V \in R^{k \times n}, s_{i,j} \in R^{k}   (3)

where W, U, V, b are parameters, denoting the weight tensor of the bi-linear term, the two weight matrices of the linear terms, and the bias vector, respectively; s_{i,j} is the score of the ith split point over the different categories, and k denotes the number of categories (for split point determination, k = 1; for nuclearity determination, k = 3; for discourse relation classification, k = 18 in English and k = 16 in Chinese). In this way, we can determine the split point, nuclearity and discourse relation jointly.
From Eq. 3, we can find that the biaffine attention score function contains three parts: the encoding output, the decoding output, and the combination of the encoder and the decoder in a bilinear way. Among them, the encoding output can be viewed as the information about the current split point, while the decoding output carries the information about the boundary points and the historical split points.
3.5 Model Training
In comparison with transition-based approaches, our approach can not only maintain a linear parsing time, but also perform batch training and decoding in parallel.
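Before the training details, a minimal sketch of the biaffine scoring layer in Eq. (3) is given below; the MLP sizes, the einsum formulation and all names are our own choices rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    """Sketch of Eq. (3): s = e^T W d + U e + V d + b, producing k scores
    per candidate split point (k = 1 for splits, 3 for nuclearity, etc.)."""

    def __init__(self, enc_dim: int, dec_dim: int, mlp_dim: int, k: int):
        super().__init__()
        self.mlp_enc = nn.Sequential(nn.Linear(enc_dim, mlp_dim), nn.ReLU())
        self.mlp_dec = nn.Sequential(nn.Linear(dec_dim, mlp_dim), nn.ReLU())
        self.W = nn.Parameter(torch.randn(mlp_dim, k, mlp_dim) * 0.01)
        self.U = nn.Linear(mlp_dim, k, bias=False)
        self.V = nn.Linear(mlp_dim, k, bias=True)   # its bias plays the role of b

    def forward(self, h_se: torch.Tensor, h_d: torch.Tensor) -> torch.Tensor:
        # h_se: (num_candidates, enc_dim) encoder states of the candidate splits
        # h_d:  (dec_dim,) decoder state at the current step
        e = self.mlp_enc(h_se)                   # (num_candidates, mlp_dim)
        d = self.mlp_dec(h_d)                    # (mlp_dim,)
        bilinear = torch.einsum("im,mkn,n->ik", e, self.W, d)
        return bilinear + self.U(e) + self.V(d)  # (num_candidates, k)

# E.g., score the 6 candidate split points of span (0, 7) at one decoder step.
scorer = BiaffineScorer(enc_dim=512, dec_dim=512, mlp_dim=64, k=1)
scores = scorer(torch.randn(6, 512), torch.randn(512))   # shape (6, 1)
```

Setting k = 1 gives the split-point scores ranked at each decoding step, while k = 3 and k = 18 (English) or 16 (Chinese) reuse the same form for nuclearity and relation prediction.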
In particular, we optimize our discourse parsing model using the Negative Log Likelihood Loss (NLL Loss), which consists of three parts, i.e., the Split Point Prediction Loss (Ls), the Nuclearity Prediction Loss (Ln), and the Relation Prediction Loss (Lr). Among them, the split point prediction loss is used to maximize the probability of selecting the correct split point at each decoding step. Here, we use Eq. 4 to compute the loss, assuming that the correct split point number at the ith step of the decoder is j. Ls = X batch X steps −log(ˆps i|✓) (4) ˆps i = ssplit i,j P ssplit i (5) Similarly, the Nuclearity Prediction Loss and the Relation Prediction Loss are to maximize the probability of correct nuclear position and discourse relation for each correct split point determined by the decoder respectively. Since the convergence speed of these three parts is different during the training process, we take the combined one (Eq. 6) as the final loss function and adjust the parameters on the development set. L = ↵sLs + ↵nLn + ↵rLr (6) 4 Experimentation In this section, we systematically evaluate our topdown text-level discourse parser. 4.1 Experimental Setting 4.1.1 Datasets In this paper, we employ both the English RST Discourse Treebank (RST-DT) (Carlson and Marcu, 2001) and the Chinese Connective-driven Discourse TreeBank (CDTB) (Li et al., 2014c) as the benchmark data sets. In an RST-style discourse tree, the leaf nodes are non-overlapping text spans called elementary discourse units (EDUs), and internal nodes are the concatenation of continuous EDUs. Adjacent nodes are related through particular discourse relations to form a discourse subtree, which is related to other adjacent nodes in the tree structure. In this way, the hierarchical tree structure is established. The English RST-DT corpus is annotated under the framework of RST. Each document is represented as one DT. It consists of 385 documents (347 for training and 38 for testing) from the Wall Street Journal. We randomly select 34 documents from the training set as our development set. 6391 Parameter English Chinese POS Embedding 30 30 EDU Encoder BiGRU 256 256 Encoder BiGRU 256 256 Decoder GRU 512 512 bi-directional GRU 256 256 uni-directional GRU 512 512 Dropout 0.2 0.33 Split Point Biaffine Attention MLP 64 64 Nuclear Biaffine Attention MLP 64 32 Relation Biaffine Attention MLP 64 128 Epoch 20 20 Batch Size 10 64 Learning Rate 0.001 0.001 ↵s 0.3 0.3 ↵n 1.0 1.0 ↵r 1.0 1.0 Table 1: Experimental parameter settings. The Chinese CDTB corpus is motivated by taking both advantages of the English RST-DT corpus (e.g. the tree structure, the nuclearity representation) and the PDTB corpus (e.g., the connectivedriven predict-argument structure) (Prasad et al., 2008). In the Chinese CDTB corpus, each paragraph is marked as a Connective-driven Discourse Tree (CDT), where its leaf nodes are EDUs, its intermediate nodes represent (insertable) connectives (i.e., discourse relations), and EDUs connected by connectives can be combined into higher level discourse units. Currently, the Chinese CDTB corpus consists of 500 newswire articles, which are further divided into 2336 paragraphs with a CDT representation for one paragraph and 10650 EDUs in total. We divide the corpus into three parts, i.e., 425 training documents containing 2002 discourse trees and 6967 discourse relations, 25 development documents containing 105 discourse trees and 396 discourse relations, 50 test documents containing 229 discourse trees and 993 discourse relations. 
4.1.2 Evaluation Metrics To evaluate the parsing performance, we use three standard ways to measure the performance: unlabeled (i.e., hierarchical spans) and labeled (i.e., nuclearity and relation) F-scores. Same as previous studies, we evaluate our system with gold EDU segmentation and binarize those non-binary subtrees with right-branching. We use the 18 fine-grained relations defined in (Carlson and Marcu, 2001) and the 16 fine-grained relations defined in (Li et al., 2014c) to evaluate the relation metric for English and Chinese respectively. In order to avoid the problem that the perSystems Bare Nuc Rel Full EN Top-down(Ours) 67.2 55.5 45.3 44.3 Ji&Eisenstein(2014)+ 64.1 54.2 46.8 46.3 Feng&Hirst(2014)+ 68.6 55.9 45.8 44.6 Li et al.(2016)+ 64.5 54.0 38.1 36.6 Braud et al.(2016) 59.5 47.2 34.7 34.3 Braud et al.(2017)⇤ 62.7 54.5 45.5 45.1 CN Top-down(Ours) 85.2 57.3 53.3 45.7 Sun&Kong(2018)(Dup) 84.8 55.8 52.1 47.7 Table 2: Performance Comparison.(Bare, bare DRS generation. Nuc, nuclearity determination. Rel, rhetorical relation classification. Full, full discourse parsing. The sign + means the systems with additional handcrafted features including syntactic, contextual and so on, ⇤means with additional cross-lingual features.) formance with RST-Parseval evaluation (Marcu, 2000) looks unreasonably high, we follow Morey et al. (2018), which adopts the standard Parseval procedure. For fair comparison, we report microaveraged F1 scores by default. 4.1.3 Hyper-parameters We use the word embedding representation based on the 300D vectors provided by Glove (2014)1 and Qiu(2018) for English and Chinese respectively, and do not update the weights of these vectors during training, while the POS embedding uses the random initialization method and is optimized with our model. We fine-tune the hyper-parameters on the development set as shown in Table 1. 4.2 Experimental Results 4.2.1 Overall Performance First, Table 2 compares the detailed performance of our top-down discourse parser with the state-ofthe-art on gold standard EDUs. For English RST-style text-level discourse parsing, we evaluate our top-down discourse parser on the RST-DT corpus and compare our model with five state-of-the-art systems as mentioned in Morey (2018) using the same evaluation metrics.2 • Ji and Eisenstein (2014), a shift-reduce parser that learns the representation of discourse units 1Impact of other pre-trained word embedding is limited. For example, ELMo can improve the full-score about 0.6%. 2We evaluate the discourse parsers proposed by Lin et al. (2019) and Liu et al. (2019) in text-level discourse parsing. However, their achieved performances are much lower than the state-of-the-art systems. The main reason is that their proposed encoders are tailored to small text spans in sentencelevel discourse parsing and are not suitable for large text spans in text-level discourse parsing. In following experiments, we no longer compare our system with them. 6392 and trains an SVM classifier jointly with a lot of hand-crafted features. • Feng and Hirst (2014), a two stage greedy parser with linear-chain CRF models. • Li et al. (2016), an attention-based hierarchical model along with hand-crafted features. • Braud et al. (2016), a sequence-to-sequence parser that is heuristically constrained to build trees with a hierarchical neural model. • Braud et al. (2017), a transition-based neural model with a lot of cross-lingual features. For Chinese CDT-style text-level discourse parsing, there are much fewer studies. 
Sun and Kong (2018) propose a complete transition-based Chinese discourse structure generation framework. However, they only concerned tree structure generation and did not consider discourse relation classification. In fact, just as noted in Wang et al. (2017), a transition-based model is more appropriate for parsing the bare discourse tree structure due to the data sparsity problem. In addition, since relation classification can benefit from the bare tree structure, a two stage parsing strategy can normally achieve better performance. In comparison, with the support of local contextual information of split points and global high-level discourse structure information, our top-down architecture is able to identify the discourse structure and discourse relations jointly. For fair comparison, we duplicate the approach proposed by Sun and Kong (2018), and evaluate it under the same experimental settings 3. We call this system as the duplicated system (denoted as “Dup”). Table 2 shows that, • For English, our top-down system achieves comparable performance with the state-of-the-art systems. It is worthwhile to note that, we focus on the effectiveness of our proposed top-down architecture in this paper. The performance of our top-down system is achieved without any other additional features, while other systems employ 3Sun and Kong (2018) reported their performance using macro-averaged F1 scores. In fact, it increases the weight of shorter documents. For Chinese CDTB, each paragraph is represented as a CDT. Statistics on the distribution of CDT heights shows that, one CDT contains about 4.5 EDUs on average, with the average height about 3.42. In this paper, we report the performance using micro-averaged F1 scores. Furthermore, to gain detailed comparison between the bottom-up and the top-down approaches, we also report the performance of relation classification and full discourse parsing. language Bare Nuclearity Relation Full EN 62.3 50.1 40.7 39.6 CN 80.2 53.2 48.5 41.7 Table 3: Performance under a full automatic setting. various additional features. For example, both Ji and Eisenstein (2014) and Feng and Hirst (2014) employed many kinds of additional hand-crafted features including syntactic, contextual and so on, while Braud et al. (2017) resort to additional cross-lingual features and achieve the gain of 3.2, 7.3, 10.8 and 10.8 on the four evaluation metrics respectively in comparison with Braud et al. (2016). This indicates the great preference of top-down over bottom-up text-level DRS parsing. This also suggests the great potential of additional carefully designed features, which are worth exploring in the future work. • For Chinese, our top-down text-level DRS parser significantly outperforms Sun and Kong (2018) on bare DRS generation, nuclearity determination and relation classification with all p-values smaller than 0.01 on significate testing. However, we find that our top-down approach achieves relatively poor performance on Full discourse parsing. This maybe due to the effectiveness of the joint learning framework as employed in Sun and Kong (2018). 
Traditional shift-reduce approaches cast the parsing task as a triple (i.e., shift/reduce action, nuclearity and relation type) identification task, and learn/predict the triple simultaneously, while our top-down approach divides the discourse parsing task into three independent sub-tasks, i.e., split point ranking, nuclearity determination and relation classification, and optimize our discourse parsing model only using the Negative Log Likelihood Loss. This also applies to the English discourse parser discussed above. • Comparing the results for English and Chinese, Chinese text-level discourse parsing looks better on all performance metrics. This maybe due to the difference between annotation strategies. In English RST-DT corpus, each document is represented as one DT, while in Chinese CDTB, each paragraph is represented as a CDT. As a result, the CDTs generally contain fewer EDUs and are relatively short in height. 6393 Bare Nuc Rel Full Height Std # " # " # " # " 1 385 339 321 251 221 233 215 213 200 2 220 183 184 117 115 116 111 94 101 3 139 119 122 71 82 71 73 59 71 4 88 75 78 52 58 44 42 39 40 5 44 34 37 17 21 16 21 10 16 6 26 18 21 13 13 6 9 6 9 7 18 16 18 7 8 6 9 2 5 >= 8 13 11 10 0 0 0 0 0 0 Overall 933 795 791 535 521 497 486 426 445 Table 4: Performance over different DT levels. (“#”- Top down approach, “"”- Bottom up approach) 4.2.2 End-to-end Performance Next, Table 3 shows the performance of the endto-end text-level discourse parser under a full automatic setting. Here, we use the two EDU detectors proposed by Li et al. (2018) and Li et al. (2013) to achieve the auto EDUs for English and Chinese respectively, and the berkeley parser4 to achieve automatic parse trees. From the results shown in Table 3 we can find that, in comparison with the overall performance using gold standard EDUs shown in Table 2, there is a significant performance reduction on all the indicators. This indicates the heavy impact of EDU segmentation. 4.2.3 Detailed Analysis Finally, we take Chinese as an example for a detailed comparative analysis. We duplicate the approach proposed by Sun and Kong (2018) and take this duplicated system as the representative of the bottom-up approach. Table 4 first compares the results over different DT levels with the gold standard numbers and the correctly identified numbers. It should be noted that, correctly determined nuclearity means both the bare tree node and its nuclearity are correctly recognized. Correctly determined relation means both the bare node and its relation are correctly recognized, and full means all three aspects are correctly recognized. From the results we can find that, in comparison with the bottom-up approach, the top-down approach can achieve better performance on Bare, Nuc and Rel metrics, while for Full-metric, the performance reduces slightly. Just as noted above, this is due to the difference between the joint learning frameworks behind these two approaches. Among three aspects, the improvement of nuclearity is most, and bare tree structure is weakest. At each level, the performance of these 4https://github.com/slavpetrov/berkeleyparser Approach NN NS SN # 67.0 42.2 33.7 " 67.6 35.4 24.5 Table 5: Performance on nuclearity determination. 
EDU Num Bare Nuc Rel " 1–5 94.8 57.9 52.0 6–10 87.0 60.7 58.6 11–15 78.0 50.1 45.4 16–20 56.2 25.0 25.0 21–25 68.9 47.0 42.4 26–30 65.4 26.9 11.5 # 1–5 97.0 67.1 56.6 6–10 86.0 57.3 59.9 11–15 75.2 50.3 41.4 16–20 56.2 25.0 25.0 21–25 76.6 57.7 40.8 26–30 69.2 42.3 19.2 Table 6: Performance over different EDU numbers. two approaches varies. This suggests that the bidirectional architecture may be an important direction in the future work. Since the improvement of nuclearity is significant, we then list the detailed results of these two approaches over different nuclearity categories. Table 5 shows that our top-down approach can determine the “NS” and “SN” much better than the bottom-up approach. This is consistent with human perception. We finally divide the DTs into six groups by EDU number and evaluate the two approaches over different groups. Table 6 shows the results. We can find that, our top-down approach achieves better performance on the first, fifth and sixth sets (i.e., the EDU number is 1–5, 21-25 and 26-30 respectively). This suggests that the proposed top-down approach may be more suitable for both end of DTs with others comparable. 6394 5 Conclusion In this paper, we propose a top-down neural architecture to text-level discourse parsing. In particular, we cast the discourse parsing task as a EDU split point ranking task, where a split point is classified to different levels according to its rank, and the EDUs associated with the split point are arranged accordingly. In this way, we can determine the complete discourse rhetorical structure as a hierarchical tree structure. Specifically, after encoding the EDUs and EDU split points, a encoder-decoder with an internal stack is employed to generate discourse tree recursively. Experimentation on the English RST-DT corpus and the Chinese CDTB corpus shows the great effectiveness of our proposed approach. In the future work, we will focus on more effective discourse parsing with additional carefully designed features and joint learning with EDU segmentation. Acknowledgements The authors would like to thank the anonymous reviewers for the helpful comments. We are greatly grateful to Cheng Sun for his inspiring ideas and preliminary work. This work is supported by Artificial Intelligence Emergency Project 61751206 under the National Natural Science Foundation of China, Project 61876118 under the National Natural Science Foundation of China and the Priority Academic Program Development of Jiangsu Higher Education Institutions. References Chlo´e Braud, Maximin Coavoux, and Anders Søgaard. 2017. Cross-lingual RST discourse parsing. arXiv preprint arXiv:1701.02946. Chlo´e Braud, Barbara Plank, and Anders Søgaard. 2016. Multi-view and multi-task training of RST discourse parsers. In Proceedings of COLING 2016, pages 1903–1913. Lynn Carlson and Daniel Marcu. 2001. Discourse tagging reference manual. ISI Technical Report ISI-TR545, 54:56. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proceedings of EMNLP 2014, pages 1724–1734. Eunsol Choi, Hannah Rashkin, Luke Zettlemoyer, and Yejin Choi. 2016. Document-level sentiment inference with social, faction, and discourse context. In Proceedings of ACL 2016, pages 333–343. Qianyin Dai, Longyin Zhang, and Fang Kong. 2019. Event temporal relation identification based on dependency and discourse relation. 
Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of ICLR 2017. Vanessa Wei Feng and Graeme Hirst. 2014. A lineartime bottom-up discourse parser with constraints and post-editing. In Proceedings of ACL 2014, pages 511–521. Naman Goyal and Jacob Eisenstein. 2016. A joint model of rhetorical discourse structure and summarization. In Proceedings of the Workshop on Structured Prediction for NLP, pages 25–34. Michael Heilman and Kenji Sagae. 2015. Fast rhetorical structure theory discourse parsing. arXiv preprint arXiv:1505.02425. Hugo Hernault, Helmut Prendinger, Mitsuru Ishizuka, et al. 2010. Hilda: A discourse parser using support vector machine classification. Dialogue & Discourse, 1(3). Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of ACL 2014, pages 13–24. Yangfeng Ji and Noah A. Smith. 2017. Neural discourse structure for text categorization. In Proceedings of ACL 2017, pages 996–1005. Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining intra-and multisentential rhetorical parsing for document-level discourse analysis. In Proceedings of ACL 2013, pages 486–496. Jing Li, Aixin Sun, and Shafiq Joty. 2018. Segbot: A generic neural text segmentation model with pointer network. In IJCAI, pages 4166–4172. Jiwei Li, Rumeng Li, and Eduard Hovy. 2014a. Recursive deep models for discourse parsing. In Proceedings of EMNLP 2014, pages 2061–2069. Qi Li, Tianshi Li, and Baobao Chang. 2016. Discourse parsing with attention-based hierarchical neural networks. In Proceedings of EMNLP 2016, pages 362– 371. Sujian Li, Liang Wang, Ziqiang Cao, and Wenjie Li. 2014b. Text-level discourse dependency parsing. In Proceedings of ACL 2014, pages 25–35. Yancui Li, wenhe Feng, jing Sun, Fang Kong, and Guodong Zhou. 2014c. Building chinese discourse corpus with connective-driven dependency tree structure. In Proceedings of EMNLP 2014, pages 2105–2114. 6395 Yancui Li, Wenhe Feng, Guodong Zhou, and Kunhua Zhu. 2013. Research of Chinese clause identificiton based on comma. Acta Scientiarum Naturalium Universitatis Pekinensis, 49(1):7–14. Xiang Lin, Shafiq Joty, Prathyusha Jwalapuram, and M Saiful Bari. 2019. A unified linear-time framework for sentence-level discourse parsing. In Proceedings of ACL 2019, pages 4190–4200. Linlin Liu, Xiang Lin, Shafiq Joty, Simeng Han, and Lidong Bing. 2019. Hierarchical pointer net parsing. In Proceedings of EMNLP 2019, pages 1006–1016. William Mann and Sandra Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243–281. Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. MIT Press. Mathieu Morey, Philippe Muller, and Nicholas Asher. 2018. A dependency perspective on RST discourse parsing and evaluation. Computational Linguistics, pages 198–235. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP 2014, pages 1532–1543. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The penn discourse treebank 2.0. In LREC 2008. Yuanyuan Qiu, Hongzheng Li, Shen Li, Yingdi Jiang, Renfen Hu, and Lijiao Yang. 2018. Revisiting correlations between intrinsic and extrinsic evaluations of word embeddings. In CCL & NLP-NABD 2017, pages 209–221. Springer. Cheng Sheng, Fang Kong, and Guodong Zhou. 2017. 
Towards better Chinese zero pronoun resolution from discourse perspective. In Processings of NLPCC 2017, pages 406–418. Cheng Sun and Fang Kong. 2018. A transition-based framework for Chinese discourse structure parsing. Journal of Chinese Information Processing, 32(12):26–34. Yizhong Wang, Sujian Li, and Houfeng Wang. 2017. A two-stage parsing method for text-level discourse analysis. In Proceedings of ACL 2017: short paper, pages 184–188. Nan Yu, Meishan Zhang, and Guohong Fu. 2018. Transition-based neural RST parsing with implicit syntax features. In Proceedings of the 27th International Conference on Computational Linguistics, pages 559–570.
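As a toy illustration of the split-point ranking view of tree construction summarized in the conclusion above, the sketch below builds a discourse tree top-down by recursively selecting the best-scoring split point within each EDU span. The scoring function is a placeholder: in the paper it is produced by the encoder-decoder with an internal stack, and nuclearity and relation labelling are omitted here.

```python
def build_tree(edus, score_split):
    """Recursively split the EDU span [i, j) at the highest-ranked split point."""
    def split(i, j):
        if j - i == 1:                      # a single EDU is a leaf
            return ("EDU", i)
        # rank the candidate split points inside the span and take the best one
        k = max(range(i + 1, j), key=lambda p: score_split(i, p, j))
        return ("NODE", split(i, k), split(k, j))
    return split(0, len(edus))

# toy usage: a dummy scorer that simply prefers balanced splits
tree = build_tree(["e1", "e2", "e3", "e4"], lambda i, p, j: -abs((p - i) - (j - p)))
print(tree)  # ('NODE', ('NODE', ('EDU', 0), ('EDU', 1)), ('NODE', ('EDU', 2), ('EDU', 3)))
```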
2020
569
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 609–618 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 609 Learning Low-Resource End-To-End Goal-Oriented Dialog for Fast and Reliable System Deployment Yinpei Dai†, Hangyu Li†, Chengguang Tang†, Yongbin Li†∗, Jian Sun†, Xiaodan Zhu‡ †Alibaba Group, Beijing ‡Ingenuity Labs Research Institute & ECE, Queen’s University {yinpei.dyp,hangyu.lhy,chengguang.tcg}@alibaba-inc.com {shuide.lyb,jian.sun}@alibaba-inc.com, [email protected] Abstract Existing end-to-end dialog systems perform less effectively when data is scarce. To obtain an acceptable success in real-life online services with only a handful of training examples, both fast adaptability and reliable performance are highly desirable for dialog systems. In this paper, we propose the Meta-Dialog System (MDS), which combines the advantages of both meta-learning approaches and human-machine collaboration. We evaluate our methods on a new extended-bAbI dataset and a transformed MultiWOZ dataset for lowresource goal-oriented dialog learning. Experimental results show that MDS significantly outperforms non-meta-learning baselines and can achieve more than 90% per-turn accuracies with only 10 dialogs on the extendedbAbI dataset. 1 Introduction End-to-end neural models have shown a great potential in building flexible goal-oriented dialog systems. They can be directly trained on past dialogs without any domain-specific handcrafting, which makes it easy to automatically scale up to new domains (Bordes et al., 2017). However, these models are normally data-hungry and have only been successfully applied to domains with rich datasets (Perez et al., 2017; Luo et al., 2019; Kim et al., 2019). In real-world scenarios, common issues with end-to-end dialog models include: (1) the shortage of proper training dialogs because of the high cost of data collection and cleaning, i.e., the data scarcity problem (Zhao and Eskenazi, 2018), and (2) a large gap between limited data and unknown online test examples, i.e., the covariate shift effect (Liu et al.). Such problems can lead to a significant performance degradation in dialog systems, which ∗Corresponding author may harm the users’ experience and result in loss of customers in commercial applications. Therefore, both fast adaptability and reliable performance are strongly desirable for practical system deployment. Fast adaptability reflects the efficiency of adapting dialog systems to domains with low-resource data. Reliable performance reflects the robustness of handling unpredictable user behaviors in online services. To boost the online performance of dialog systems, there have been some recent work (Rajendran et al., 2019; Wang et al., 2019; Lu et al., 2019) on designing end-to-end models in a human-machine joint-teaming manner. For instance, the dialog system in (Rajendran et al., 2019) can identify an ongoing dialog during testing when the system might fail and transfer it to a human agent. But all these methods are trained with sufficient data, which hinders the possibility of rapidly prototyping the models in new domains with restricted resources. In this paper, we formulate the low-resource goal-oriented dialog learning as a few-shot learning problem, where a limited numbers of dialogs are used for training and the remaining for the test. 
We propose the Meta-Dialog System (MDS), an end-toend human-machine teaming framework optimized by the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017). In general, MDS learns to make prediction and requests human by finding good initial parameters, which can be adapted to new tasks fast and reliably by using fewer dialogs. We evaluate our methods on a new multi-domain dialog dataset called extended-bAbI. Results show that MDS achieves obvious performance improvement over baselines and attains more than 90% per-turn accuracy on new domains with only 10 dialogs. We also perform experiments on MultiWOZ dataset (Eric et al., 2019) which has been transformed into simplified bAbI format and observe similar superior results with MDS. 610 In summary, the main contributions of this paper are three-fold: (1) To the best of our knowledge, this is the first study on applying meta-learning to retrieval-based end-to-end goal-oriented dialog systems; (2) we leverage the MAML algorithm to optimize a human-machine collaborative dialog system and show very promising results on the lowresource dialog tasks; and (3) we propose a new dataset and hope that can help bring forward the research in this area. 2 The Proposed Method In this section, we first introduce the problem definition and our new dataset; we then elaborate the framework of MDS and meta-learning procedures. Problem Definition. We focus on the retrievalbased goal-oriented dialog tasks (Perez et al., 2017), where a training data di usually contains a triple (Hi, yi, R). Hi denotes the dialog history consisting of all user utterances and system responses up to the current turn, R is a set of given candidate responses and yi is the index of the correct response in R. The main task is to train an end-to-end dialog model to predict yi from R based on Hi. Extended-bAbI Dataset. The original bAbI dataset (Bordes et al., 2017) is not suitable for lowresource settings due to the lack of domains and tasks. We extend it into a multi-domain dataset through complicated simulation rules and construct templates with a more diversity to raise the difficulty. There are 7 domains in total: restaurant, flights, hotels, movies, music, tourism and weather, each of which has its own ontology and the candidate response set. Similar to (Bordes et al., 2017), a complete dialog in extended-bAbI contains four phases of interactions: (1) the system asks for required attributes to constrain the search and issues the first API call; (2) the user updates their requests for revised API calls; (3) the system confirms for multiple times to determine the entity the user wants; (4) the user requests more attributes for extra information based on the final entity. The total number of dialogs is 21,000 and the detailed examples and statistics are given in Appendix A.1. 2.1 Model Architecture In MDS, there is an encoding module to extract neural features of dialogs and a policy module to make system actions of either predicting responses or requesting human. All modules are jointly optimized with the MAML algorithm. The main framework of training MDS is shown in Figure 1. Encoding Module. It contains a history encoder to compute the dialog state vector si for Hi and a response encoder to compute the response embedding rj for the j-th response in R. The dimensions of si and rj are set as the same. 
In this paper, we use the MemN2N (Sukhbaatar et al., 2015) as the history encoder and a simple additive model as the response encoder, but many other models optimized by gradient descent may be applied here.

Policy Module. This module consists of a switch S that makes a binary decision on whether to ask a human to select the response, and a response predictor P that predicts the response itself when the human is not requested. We assume that the response chosen by the human is always correct. For the optimization of P, the widely used large-margin cosine loss (Wang et al., 2018; Lin and Xu, 2019) is employed, since it maximizes the decision margin in the angular space and forces the model to learn more discriminative deep features. Suppose a batch of training data is D = {d_1, ..., d_i, ..., d_{|D|}}; then the formulation is:

L_{LMC} = \sum_{i=1}^{|D|} -\log \frac{e^{a(\cos(s_i, r_{y_i}) - b)}}{e^{a(\cos(s_i, r_{y_i}) - b)} + \sum_{j \neq y_i} e^{a\cos(s_i, r_j)}}    (1)

where cos(·, ·) computes the cosine similarity of two vectors, a is the scaling factor, and b is the cosine margin (a = 30, b = 0.1 in our experiments). In the test phase, the model predicts the answer with the maximal cosine similarity, y^*_i = argmax_j cos(s_i, r_j).

The switch S is a neural binary classifier that also takes s_i and each r_j as input and calculates the probability of requesting a human as follows:

w_{ij} = \frac{e^{s_i^\top W r_j}}{\sum_{k=1}^{|R|} e^{s_i^\top W r_k}}    (2)
c_i = \sum_{j=1}^{|R|} w_{ij} r_j    (3)
f_i = s_i \oplus c_i    (4)
p_i = \sigma(FC(f_i))    (5)

where σ is the sigmoid function and ⊕ the concatenation operator for vectors. FC(·) is a fully-connected neural network with one hidden layer of half the input size, activated by a tanh function. |R| is the size of R and W is a trainable square matrix.

Figure 1: An overview of training the Meta-Dialog System.

Learning to switch. Since there are no actual labels indicating whether it is correct for S to ask a human, some previous work (Woodward and Finn, 2016; Rajendran et al., 2019) proposes the REINFORCE algorithm (Williams, 1992) for weakly-supervised training. However, their reward settings fail to penalize the case where the model asks a human even though it could have predicted correctly, which may lead to redundant requests. To account for this effect, we propose a new reward definition. For the batch data D, we calculate the F1 scores for the positive and the negative data, respectively (detailed explanations can be found in Appendix A.2), and take their average to obtain a scalar value score(D). Each data point d_i ∈ D is then assigned a reward by computing an incremental value:

r_i = score(D) - score(D - d_i)    (6)

By maximizing such rewards, the switch S learns to be more effective and to ask a human only when necessary. The reinforcement learning loss for S is L_{RL} = \sum_{i=1}^{|D|} -r_i \log p_i, and the final loss of our model is L = L_{LMC} + L_{RL}.

2.2 Training Procedure

We rewrite the final loss L as L(M_θ, D) for clarity, where M_θ denotes the dialog model with trainable parameters θ and D is the batch data used for training. During meta-learning, we first choose one domain as the target domain and treat the rest as source domains. We then uniformly sample K different domains T = {τ_1, ..., τ_K} from the source domains as meta-tasks. For each meta-task τ_k, we sample N data points as the support set D^{sup}_k and another N data points with the same answers as the query set D^{que}_k.
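Before moving on to the meta-learning loop, the snippet below gives a minimal PyTorch-style sketch of Eqs. (1)-(5): the large-margin cosine loss used for the response predictor P and the attention-based switch S. The tensor shapes, the 25-dimensional embeddings (the word vector size reported in Section 3.1), and the toy usage at the end are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def large_margin_cosine_loss(s, r, y, a=30.0, b=0.1):
    """Eq. (1): s is (B, d) dialog states, r is (C, d) candidate response
    embeddings, y is (B,) indices of the gold responses."""
    cos = F.normalize(s, dim=-1) @ F.normalize(r, dim=-1).t()    # (B, C) cosine similarities
    margin = F.one_hot(y, num_classes=r.size(0)).float() * b     # subtract b only at the gold index
    return F.cross_entropy(a * (cos - margin), y)

class Switch(torch.nn.Module):
    """Eqs. (2)-(5): attend over the candidate responses, then output the
    probability of requesting a human."""
    def __init__(self, d):
        super().__init__()
        self.W = torch.nn.Parameter(0.01 * torch.randn(d, d))    # trainable square matrix
        self.fc = torch.nn.Sequential(                           # FC(.): one tanh hidden layer of size d
            torch.nn.Linear(2 * d, d), torch.nn.Tanh(), torch.nn.Linear(d, 1))

    def forward(self, s, r):
        w = torch.softmax(s @ self.W @ r.t(), dim=-1)            # Eq. (2), shape (B, C)
        c = w @ r                                                # Eq. (3), shape (B, d)
        f = torch.cat([s, c], dim=-1)                            # Eq. (4)
        return torch.sigmoid(self.fc(f)).squeeze(-1)             # Eq. (5), request probability p_i

# toy usage: batch of 4 dialog states, 10 candidate responses, 25-d embeddings
s, r, y = torch.randn(4, 25), torch.randn(10, 25), torch.randint(10, (4,))
loss = large_margin_cosine_loss(s, r, y)
p_request = Switch(25)(s, r)
```

As in Eq. (1), the margin b is subtracted only from the gold-response cosine before scaling, so a standard cross-entropy over the shifted logits reproduces the loss.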
Algorithm 1 Meta-learning for MDS Input: The learning rates α, β Output: optimal meta-learned model 1: Initialize model parameters θ randomly 2: while not converged do 3: Sample T from source domains and prepare Dsup k , Dque k 4: for each τk do 5: Evaluate L(Mθ, Dsup k ) 6: Compute θ ′ k = θ −α∇θL(Mθ, Dsup k ) 7: Evaluate L(Mθ′ k, Dque k ) 8: end for 9: Update θ ←θ −β∇θ PK k=1 L(Mθ′ k, Dque k ) 10: end while Mθ is first updated on support sets for each τk: θ ′ k = θ −α∇θL(Mθ, Dsup k ) (7) Then Mθ is evaluated on each Dque k with θ ′ k respectively and is optimized as follows: θ ←θ −β∇θ XK k=1 L(Mθ′ k, Dque k ) (8) where α, β are learning rates. By training on multiple tasks via MAML, Mθ can learn good initial parameters that is applicable on new tasks or domains (Finn et al., 2017; Mi et al., 2019). The algorithm is summarised in Algorithm 1. After this meta-learning as pre-training, we finetune Mθ on the target domain with the first L dialogs of its training set, where L is a small number. To mimic the situation of online testing, we evaluate Mθ on the whole test sets and regard those unseen user utterances as new user behaviours. 3 Experiments and Results In our experiments, we first verify the capability of MDS on our newly simulated dialog dataset 612 extended-bAbI, and then conduct extra evaluation on the more realistic dataset MultiWOZ 2.1 (Eric et al., 2019). 3.1 Setup We select each domain as the target domain in turn and take the average of the results in all domains. Metric. Following (Wang et al., 2019), we report the user-perceived per-turn accuracy (‘per-turn accuracy’ is used in the remainder of the paper), where the prediction of one turn is considered correct if the model either selects the right response by itself or asks human. To be fair, we also report the human request rate. The less the request rate and higher per-turn accuracy are, the more reliable the model performs online. Implementation details. For the meta-learning, we use SGD for the inner loop and Adam for the outer loop with learning rate α=0.01 and β=0.001. The meta-task size K is 4 and the support or query set size N is 16. For the standard MLE training, we use Adam with a learning rate of 0.001 and set the batch size as 32. Both schemes are trained for a maximum of 5000 iterations with early stopping on the validation set. During fine-tuning on new domains, we use SGD with the learning rate 0.01 for all models and report the final results after fine-tuning 10 iterations on L training dialogs of the target domain, where L=0, 1, 5, 10. The word vector size is 25 and all MemN2Ns take 3 hops. 3.2 Baselines We compare MDS with the following baselines: • Mem: A MemN2N (Sukhbaatar et al., 2015) model trained with standard MLE. • MetaMem: A MemN2N trained with MAML. Both Mem and MetaMem can not request human. • Mem+C: A MemN2N model combined with a binary classifier in (Rajendran et al., 2019), which has different objective functions and optimization. • IDS: The incremental dialog system used in (Wang et al., 2019), which requests human by estimating the uncertainty through a variational autoencoder. • MDS-switch: A MDS without the switch S. • MDSrand: A MDS whose switch is replaced with a random classifier that has the same request rate. • MDSmle: A MDS whose meta-learning optimization is replaced with standard MLE. Figure 2: The per-turn accuracy of different methods on the test set during fine-tuning with 1 dialog adaptation where the target domain is restaurant. 
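Returning to the meta-learning procedure of Section 2.2, the following is a minimal, self-contained sketch of Algorithm 1 and Eqs. (7)-(8), written for a toy linear scorer with explicit parameter tensors rather than the full MDS model. The data shapes and the pre-built list of meta-task batches are simplifying assumptions; the default learning rates match those reported in Section 3.1.

```python
import torch
import torch.nn.functional as F

def maml_step(theta, meta_tasks, alpha=0.01, beta=0.001):
    """One outer update of Eq. (8). `theta` is a dict of parameter tensors;
    `meta_tasks` is a list of ((x_sup, y_sup), (x_que, y_que)) batches."""
    outer_loss = 0.0
    for (xs, ys), (xq, yq) in meta_tasks:
        # inner step, Eq. (7): adapt the parameters on the support set
        loss_sup = F.cross_entropy(xs @ theta["W"] + theta["b"], ys)
        grads = torch.autograd.grad(loss_sup, list(theta.values()), create_graph=True)
        theta_k = {n: p - alpha * g for (n, p), g in zip(theta.items(), grads)}
        # evaluate the adapted parameters on the query set
        outer_loss = outer_loss + F.cross_entropy(xq @ theta_k["W"] + theta_k["b"], yq)
    # outer step, Eq. (8): update the shared initialization
    grads = torch.autograd.grad(outer_loss, list(theta.values()))
    with torch.no_grad():
        for p, g in zip(theta.values(), grads):
            p -= beta * g

# toy usage: two meta-tasks, 25-d features, 10 candidate responses, support/query size 16
theta = {"W": torch.randn(25, 10, requires_grad=True),
         "b": torch.zeros(10, requires_grad=True)}
tasks = [((torch.randn(16, 25), torch.randint(10, (16,))),
          (torch.randn(16, 25), torch.randint(10, (16,))))
         for _ in range(2)]
maml_step(theta, tasks)
```

Because the inner gradient is taken with create_graph=True, the outer update differentiates through the adaptation step, as MAML requires.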
3.3 Results on Extended-bAbI Table 1 shows few-shot adaptation results for different methods. MDS significantly outperforms other models under all adaptation sizes of new dialogs and can achieve a 91.31% per-turn accuracy on average with only 10 new dialogs. There is a gap between methods without the switch (such as Mem, MetaMem and MDS-switch) and methods with the switch in Table 1, indicating that the switch S is crucial for improving the overall per-turn accuracy because of the human agent. However, without proper objective functions and meta-learning optimization, Mem+C and IDS2 have poorer performances in both metrics than MDS even if they contain the switch module. In the ablation study, we see a steady increase of about 10% per-turn accuracy from the comparison between MDS and MDSrand, suggesting that the switch does identify intractable dialogs. MDSmle is the closest baseline to MDS, but we still observe an obvious improvement, which means joint optimization of S and P via meta-learning allows faster and better adaptation while maintaining similar request rates. Appendix A.3 illustrates detailed case studies for different methods. To further investigate the adaptation process, we present the fine-tuning curves for different methods with 1 dialog adaptation in Figure 2. As it can be seen, MDS achieves the best accuracy at the beginning and converges fastest as well, showing that it can transfer on new tasks quickly by finding better parameter initialization. 2We only report the result of IDS with 10 dialog adaptation since its request rates are too high to be fair in other settings. 613 Method No adaptation Adapt with 1 dialog Adapt with 5 dialogs Adapt with 10 dialogs accuracy request accuracy request accuracy request accuracy request Mem 32.28±1.86 n.a. 45.02±1.39 n.a. 64.07±0.76 n.a. 71.56±0.48 n.a. MetaMem 39.45±1.13 n.a. 48.95±1.18 n.a. 65.57±0.69 n.a. 72.19±0.81 n.a. Mem+C 58.74±2.89 37.34±5.23 68.27±2.19 34.83±4.35 81.41±2.26 36.96±5.05 87.46±2.07 38.09±5.29 IDS 90.91±4.29 83.98±6.43 MDS-switch 41.03±0.98 n.a. 50.31±1.16 n.a. 65.72±1.13 n.a. 72.35±0.90 n.a. MDSrand 61.05±1.20 34.75 66.02±0.91 32.31 77.27±0.76 34.31 79.70±0.98 35.26 MDSmle 59.89±3.11 34.36±6.09 69.40±2.25 32.46±4.06 83.04±2.07 33.90±5.22 88.13±1.63 35.28±5.08 MDS 64.93±2.39 34.75±5.87 74.71±2.15 32.31±4.34 86.49±2.01 34.31±4.36 91.31±1.16 35.26±4.23 Table 1: Few-shot results on the extended-bAbI dataset. The numbers represent the average of means and standard deviations of Task 5 in all target domains. Each experiment run 10 times with different seeds; ’n.a.’ means no switch in the model; ’accuracy’ is the user-perceived per-turn accuracy and ’request’ is the request rate. 3.4 Results on MultiWOZ MultiWOZ (Budzianowski et al., 2018) is a widelyused multi-domain Wizard-of-Oz dialog dataset spanning 7 distinct domains and containing 10k dialogs. This realistic dataset has been a standard benchmark for various dialog tasks such as belief tracking and policy optimization. In our experiment, we use the corrected version MultiWOZ 2.1 (Eric et al., 2019) for evaluation. To translate the MultiWOZ dialogs into bAbI-format data, we first delexicalize the slot-values in user utterances using dialog labels, and then produce a set of canonical system acts as the candidate responses by simplifying the original dialog acts. Only dialogs containing single domain are used in our experiments and a MultiWOZ dialog sample is given in Appendix A.4. Table 2 shows the adaptation results for different models on MultiWOZ 2.1. 
It can be seen that MDS still largely outperforms other models with the adaptation of 10 dialogs. The degradation of per-turn accuracy from extended-bAbI to MultiWOZ is reasonable since the user utterance is more diverse and the dialog policy is more flexible. 4 Related Work End-to-end neural approaches of building dialog systems have attracted increasing research interest. The work of (Bordes et al., 2017) is the first attempt to solve goal-oriented dialog tasks with endto-end models. Further improvements has been made in (Williams et al., 2017) to combine explicit domain-specific knowledge and implicit RNN features. Luo et al. (2019) take user personalities into consideration for better user satisfaction. Rajendran et al. (2018) learn dialogs with multiple possible answers. Our work is inspired by the work of (Rajendran et al., 2019; Wang et al., 2019), which Method Adapt with 10 dialogs accuracy request Mem 56.87±1.63 n.a. MetaMem 62.78±2.05 n.a. Mem+C 80.59±3.13 38.18±5.01 MDS-switch 64.50±3.75 n.a. MDSrand 74.78±4.35 38.34 MDSmle 80.92±3.02 37.91±4.20 MDS 83.52±3.30 38.34±6.96 Table 2: Few-shot test results on MultiWOZ 2.1. propose to solve unseen user behaviors through human-machine teamwork. The research of (Liu et al.; Chen et al., 2017; Lu et al., 2019) also show the advantages of incorporating the role of human to teach online. However, dialog learning in lowresource scenarios has not been investigated. Meta-learning aims to learn new tasks rapidly with a few training examples (Sung et al., 2018; Finn et al., 2017), which fits well to our task. There have been some work applying meta-learning to other tasks in dialog research, such as that in (Dou et al., 2019; Geng et al., 2019) for natural language understanding and (Qian and Yu, 2019; Mi et al., 2019) for natural language generation. 5 Conclusion and Future Work In this paper, we leverage the MAML algorithm to optimize a human-machine collaborative dialog system, which shows good results for both fast adaptability and reliable performance. In the future, we plan to use more powerful encoders and evaluate our methods on real dialog data. Acknowledgments The research of the last author is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). 614 References Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog. ICLR. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gaˇsi´c. 2018. MultiWOZ - a large-scale multi-domain wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026, Brussels, Belgium. Association for Computational Linguistics. Lu Chen, Xiang Zhou, Cheng Chang, Runzhe Yang, and Kai Yu. 2017. Agent-aware dropout DQN for safe and efficient on-line dialogue policy learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2454–2464, Copenhagen, Denmark. Association for Computational Linguistics. Zi-Yi Dou, Keyi Yu, and Antonios Anastasopoulos. 2019. Investigating meta-learning algorithms for low-resource natural language understanding tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1192– 1197, Hong Kong, China. Association for Computational Linguistics. 
Mihail Eric, Rahul Goel, Shachi Paul, Adarsh Kumar, Abhishek Sethi, Peter Ku, Anuj Kumar Goyal, Sanchit Agarwal, Shuyang Gao, and Dilek Hakkani-Tur. 2019. Multiwoz 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. arXiv preprint, 1907.01669. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126–1135. JMLR. org. Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, and Jian Sun. 2019. Induction networks for few-shot text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3902–3911, Hong Kong, China. Association for Computational Linguistics. Byoungjae Kim, KyungTae Chung, Jeongpil Lee, Jungyun Seo, and Myoung-Wan Koo. 2019. A bilstm memory network for end-to-end goal-oriented dialog learning. Computer Speech & Language, 53:217–230. Ting-En Lin and Hua Xu. 2019. Deep unknown intent detection with margin loss. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5491–5496, Florence, Italy. Association for Computational Linguistics. Bing Liu, Gokhan T¨ur, Dilek Hakkani-T¨ur, Pararth Shah, and Larry Heck. Dialogue learning with human teaching and feedback in end-to-end trainable task-oriented dialogue systems. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Yichao Lu, Manisha Srivastava, Jared Kramer, Heba Elfardy, Andrea Kahn, Song Wang, and Vikas Bhardwaj. 2019. Goal-oriented end-to-end conversational models with profile features in a real-world setting. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 48–55, Minneapolis - Minnesota. Association for Computational Linguistics. Liangchen Luo, Wenhao Huang, Qi Zeng, Zaiqing Nie, and Xu Sun. 2019. Learning personalized end-to-end goal-oriented dialog. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6794–6801. Fei Mi, Minlie Huang, Jiyong Zhang, and Boi Faltings. 2019. Meta-learning for low-resource natural language generation in task-oriented dialogue systems. AAAI. Julien Perez, Y-Lan Boureau, and Antoine Bordes. 2017. Dialog system & technology challenge 6 overview of track 1-end-to-end goal-oriented dialog learning. Dialog System Technology Challenges, 6. Kun Qian and Zhou Yu. 2019. Domain adaptive dialog generation via meta learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2639–2649, Florence, Italy. Association for Computational Linguistics. Janarthanan Rajendran, Jatin Ganhotra, and Lazaros C Polymenakos. 2019. Learning end-to-end goaloriented dialog with maximal user task success and minimal human agent use. Transactions of the Association for Computational Linguistics, 7:375–386. Janarthanan Rajendran, Jatin Ganhotra, Satinder Singh, and Lazaros Polymenakos. 2018. Learning endto-end goal-oriented dialog with multiple answers. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3834–3843, Brussels, Belgium. 
Association for Computational Linguistics. Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Advances in neural information processing systems, pages 2440–2448. Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. 2018. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference 615 on Computer Vision and Pattern Recognition, pages 1199–1208. Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. 2018. Cosface: Large margin cosine loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5265–5274. Weikang Wang, Jiajun Zhang, Qian Li, Mei-Yuh Hwang, Chengqing Zong, and Zhifei Li. 2019. Incremental learning from scratch for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3710–3720, Florence, Italy. Association for Computational Linguistics. Jason D. Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 665– 677, Vancouver, Canada. Association for Computational Linguistics. Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256. Mark Woodward and Chelsea Finn. 2016. Active oneshot learning. NIPS (Deep Reinforcement Learning Workshop). Tiancheng Zhao and Maxine Eskenazi. 2018. Zeroshot dialog generation with cross-domain latent actions. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 1– 10, Melbourne, Australia. Association for Computational Linguistics. 616 A Appendices A.1 Extended-bAbI dialog dataset We extend bAbI dataset (Bordes et al., 2017) into a larger dialog dataset consisting of multiple domains, where each domain has its own ontology and the candidate response set. The main task is a reponse retrieval problem, where the dialog system needs to select the right response for current dialog history from the given candidate response set. The size of candidate sets in each domain are shown in Table 3. The total number of dialogs for each task is 3000 (1500/500/1000 for train/dev/test set respectively). More statistics are given in Table 4. Detailed dialog samples of extended-bAbI can be found in Table 6. Domain # responses restaurant 333 flights 71 hotels 472 movies 68 music 56 tourism 47 weather 22 Table 3: The number of candidate responses in each domain. Item number # of domains 7 # of dialog tasks 35 # of total system responses 1069 # of total templates for user utterances 685 # vocabulary size 386 # of sentences per dialog 12.4 # of words per sentence 4.9 Table 4: Statistics of extended-bAbI dataset. A.2 Reward Settings (Woodward and Finn, 2016; Rajendran et al., 2019) defined rewards for S as follows: • Rreq: if human is requested • Rcor: if human is not requested and the model prediction is correct • Rinc: if human is not requested and the model prediction is wrong This kind of reward setting did not penalize the case that human is requested but model prediction is correct, and the value of rewards is fixed during reinforcement learning. Here we propose better reward definition. Let a batch of data be D = {d1, d2, ...di, ...}. 
The label of each data is set as positive if the response predictor P selects a wrong answer and negative if not. The prediction of each data is set as positive if the switch S chooses human and negative if not. Then we calculate the F1 scores for positive data and negative data separately and take the average of them to get a scalar value score(D). Each data di in the batch D is assigned to a reward value by computing the difference of total scores with or without di ri = score(D) −score(D −di) In this way, the reward function is less sensitive to the model prediction accuracy and can self-adjust dynamically during training. Both situations that human is asked while the model predicts correct and human is not asked while the model predicts wrong are given negative rewards. A.3 Case Studies There are two sample dialog generated by different models adapted with 10 dialogs on extended-bAbI in Table 5. It shows that MDS can predict the response correctly and choose to request human properly. A.4 MultiWOZ Data Table 7 shows a bAbI-format dialog example transferred from MultiWOZ. The data and code for dialog transformation of MultiWOZ and dialog generation of exteneded-bAbI are attached in the supplemental material. 617 sample 1 target domain = music Usr how do you do Sys hello what can i do for you Usr i ’d like to hear taylor ’s music Sys what kind of music do you like Usr input i would like to hear some country songs Correct answer api call taylor country Predictions: Mem hello what can i do for you [incorrect] MetaMem what kind of music do you like [incorrect] Mem+C what do you think of this song:entityOrder=0 [incorrect][no-request] MDS api call taylor country [correct][request] sample 2 target domain = weather Usr hello Sys hello how i can help you Usr i want to know the weather in guangzhou Sys which date do you want to check Usr input i want to know the weather today Correct answer api call london today Predictions: Mem which date do you want to check [incorrect] MetaMem api call guangzhou today [correct] Mem+C hello how i can help you [incorrect][no-request] MDS api call guangzhou today [correct][no request] Table 5: Prediction of different methods on two sampled dialogs. The flag [incorrect] or [correct] indicate the result of response prediction. The flag [request] or [no-request] indicate whether human is requested or not. 618 Domain: music Usr: what ’s up Sys: how can i help you Usr: how can i find some music Sys: any singer in your mind Usr: i ’d like some songs by my idol taylor Sys: what kind of music do you like Usr: do you have country music Sys: api call taylor country Usr: nice Sys: what do you think of this song : entityOrder=0 Usr: do you have other suggestions Sys: how about another song : entityOrder=1 Usr: let ’s do it Sys: great let me play this song for you Usr: what is the album of the music Sys: here you are entityOrder=3 album Usr: thanks a lot Sys: can i help you with anything else Usr: no thank you Sys: you are welcome Domain: weather Usr: i want to know the weather tomorrow Sys: which city would you like to check Usr: hangzhou please Sys: api call hangzhou tomorrow Usr: what ’s the weather today Sys: api call hangzhou today Usr: thanks Sys: it ’s my pleasure Table 6: Two dialog samples from music and weather domains on extended-bAbI. Domain: restaurant Usr: I would like a v price restaurant which serves v food food. Sys: api call v food v price Usr: What is their address and what part of town are they in? 
Sys: Inform Addr Area Usr: Sorry what type of food do they serve? Sys: Inform Food Usr: Thank you. goodbye. Sys: general bye Table 7: An example of bAbI-format MultiWOZ dialog.
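To make the incremental reward of Eq. (6) and the description in Appendix A.2 concrete, here is a small sketch built on scikit-learn's F1 score. The helper names and the simple leave-one-out loop are assumptions consistent with the appendix, not the authors' code.

```python
import numpy as np
from sklearn.metrics import f1_score

def batch_score(labels, preds):
    """Average of the F1 of the positive class (predictor P was wrong) and the
    negative class (P was right), as defined in Appendix A.2."""
    return 0.5 * (f1_score(labels, preds, pos_label=1, zero_division=0) +
                  f1_score(labels, preds, pos_label=0, zero_division=0))

def incremental_rewards(labels, preds):
    """r_i = score(D) - score(D - d_i) for every example in the batch."""
    labels, preds = np.asarray(labels), np.asarray(preds)
    full = batch_score(labels, preds)
    keep = np.ones(len(labels), dtype=bool)
    rewards = []
    for i in range(len(labels)):
        keep[i] = False
        rewards.append(full - batch_score(labels[keep], preds[keep]))
        keep[i] = True
    return np.array(rewards)

# labels: 1 if the response predictor was wrong; preds: 1 if the switch asked a human
print(incremental_rewards(labels=[1, 0, 1, 0], preds=[1, 0, 0, 0]))
```

Note that the leave-one-out computation costs one extra F1 evaluation per example in the batch, which is cheap for the batch sizes used here.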
2020
57
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6396–6407 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6396 Amalgamation of protein sequence, structure and textual information for improving protein-protein interaction identification Pratik Dutta, Sriparna Saha Department of Computer Science & Engineering Indian Institute of Technology Patna (pratik.pcs16, sriparna)@iitp.ac.in Abstract An in-depth exploration of protein-protein interactions (PPI) is essential to understand the metabolism in addition to the regulations of biological entities like proteins, carbohydrates, and many more. Most of the recent PPI tasks in BioNLP domain have been carried out solely using textual data. In this paper, we argue that incorporation of multimodal cues can improve the automatic identification of PPI. As a first step towards enabling the development of multimodal approaches for PPI identification, we have developed two multimodal datasets which are extensions and multimodal versions of two popular benchmark PPI corpora (BioInfer and HRPD50). Besides, existing textual modalities, two new modalities, 3D protein structure and underlying genomic sequence, are also added to each instance. Further, a novel deep multi-modal architecture is also implemented to efficiently predict the protein interactions from the developed datasets. A detailed experimental analysis reveals the superiority of the multi-modal approach in comparison to the strong baselines including uni-modal approaches and state-of the-art methods over both the generated multimodal datasets. The developed multi-modal datasets are available for use at https:// github.com/sduttap16/MM_PPI_NLP. 1 Introduction Understanding protein-protein interactions (PPI) is indispensable to comprehend different biological processes such as translation, protein functions (Kulmanov et al., 2017), gene functions (Dutta and Saha, 2017; Dutta et al., 2019b), metabolic pathways, etc. The PPI information helps researchers to discover disease mechanisms and plays seminal role in designing the therapeutic drugs (Goncearenco et al., 2017). Over the years, a significant amount of protein-protein interaction information has been published in scientific articles in unstructured text formats. However, in recent years, there has been an exponential rise in the number of biomedical publications (Khare et al., 2014). Therefore, it becomes imperative, urgent and of extreme interest to develop an intelligent information extraction system to assist biologists in curating and maintaining PPI databases. This pressing need has motivated Biomedical Natural Language Processing (BioNLP) researchers to automatically extract PPI information by exploring various AI techniques. Recent advancements in deep learning (LeCun et al., 2015)(Bengio et al., 2007) have opened up new avenues in solving different well-known problems ranging from computational biology (Alipanahi et al., 2015; Dutta et al., 2019a), machine translations (Cho et al., 2014), image captioning (Chen et al., 2017). Subsequently, there is a notable trend in using deep learning for solving different natural language processing (NLP) tasks in the biomedical and clinical domains (Asada et al., 2018; Alimova and Tutubalina, 2019) including the identification of protein-protein interactions from biomedical corpora (Yadav et al., 2019; Peng and Lu, 2017). 
Multi-modal deep learning models, combining information from multiple sources/modalities, show promising results compared to the conventional single modal-based models while solving various NLP tasks like sentiment and emotion recognition (Qureshi et al., 2019, 2020), natural language generation, machine translation (Poria et al., 2018; Zhang et al., 2019; Qiao et al., 2019; Fan et al., 2019) etc. There exist few popular multi-modal datasets which are extensively used in solving various problems in NLP like emotion recognition from conversations (Poria et al., 2018; Chen et al., 2018), image captioning (Lin et al., 2014), sentiment analysis (Zadeh et al., 2016), etc. Compared to single modal-based approaches, multi-modal techniques provide a more comprehensive perspective of the 6397 dataset under consideration. Despite the popularity of multi-modal approaches in solving traditional NLP tasks, there is a dearth of multi-modal datasets in BioNLP domain especially for the PPI identification task. The available PPI benchmark datasets contain solely the textual knowledge of different protein pairs, which do not help in anticipating the molecular properties of the proteins. Hence, along with the textual information, incorporation of molecular structure or underlying genomic sequence can aid in understanding the regulations of the protein interactions. The integration of multi-modal features can help in obtaining deeper insights but the concept of multimodal architecture, for textual and biological aspects, has not been cultivated much in the BioNLP domain (Peissig et al., 2012; Jin et al., 2018). 1.1 Motivation and Contribution The main motivation for this research work is to generate multi-modal datasets for PPI identification task, where along with the textual information present in the biomedical literature, we did explore the genetic and structure information of the proteins. The biomedical and clinical text database is an important resource for learning about physical interactions amongst protein molecules; however, it may not be adequate for exploring biological aspects of these interactions. In the field of Bioinformatics, there are various web-based enriched archives12 that contain multi-omics biological information regarding protein interactions. The integration of multi-omics information from these aforementioned databases helps in understanding the various physiological characteristics (Sun et al., 2019; Ray et al., 2014; Amemiya et al., 2019; Hsieh et al., 2017; Dutta et al., 2020). Hence, in our current work, along with the textual information from biomedical corpora, we have also incorporated structural properties of protein molecules as biological information for solving PPI task. For structural information of proteins, we have considered the atomic structure (3D PDB structure) and underlying nucleotide sequence (FASTA sequence) of protein molecules. In the BioNLP domain, collection of biological data (muti-omics information) from the text corpus is little difficult. To obtain the aforementioned information about other modalities, we need to exploit different web-based archives that 1https://www.cancer.gov 2https://www.ncbi.nlm.nih.gov/ are meant for biological structures. Drawing inspirations from these findings, we have generated a protein-protein interaction-based multi-modal dataset which includes not only textual information, but also the structural counterparts of the proteins. 
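As a rough sketch of how such archive lookups can be automated once the identifiers are known, the snippet below retrieves a FASTA sequence from the Ensembl REST service and a PDB structure file from RCSB. The endpoint URLs reflect the publicly documented services, and the example identifiers for Megalin (LRP2) are taken from Figure 1; none of this is the authors' released crawling code.

```python
import requests

def fetch_fasta(ensembl_id):
    """Download the genomic sequence for an Ensembl gene ID in FASTA format."""
    url = f"https://rest.ensembl.org/sequence/id/{ensembl_id}?content-type=text/x-fasta"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.text

def fetch_pdb(pdb_id):
    """Download the 3D atomic structure for a PDB ID from the RCSB archive."""
    url = f"https://files.rcsb.org/download/{pdb_id.upper()}.pdb"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    # example identifiers for Megalin (LRP2), as listed in Figure 1
    print(fetch_fasta("ENSG00000081479")[:200])
    print(fetch_pdb("2M0P")[:200])
```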
Finally, a novel deep multimodal architecture is developed to efficiently predict the protein-protein interactions by considering all modalities. The main contributions of this study are summarized as follows: 1. For this study, we extend and further improve two biomedical corpora containing PPI information for multi-modal scenario by manually annotating and web-crawling two different bio-enriched archives. 2. Our proposed multi-modal architecture uses self-attention mechanism to integrate the extracted features of different modalities. 3. This work is a step towards integrating multiomics information with text-mining from biomedical articles for enhancing PPI identification. To the best of our knowledge, this is the first attempt in this direction. 4. The results and the comparative study prove the effectiveness of our developed multimodal datasets along with proposed multimodal architecture. 2 Related Works There are few works (Ono et al., 2001; Blaschke et al., 1999; Huang et al., 2004) which focus on rule-based PPI information extraction method such as co-occurrence rules (Stapley and Benoit, 1999) from the biomedical texts. In (Giuliano et al., 2006), relation is extracted from entire sentence by considering the shallow syntactic information. (Erkan et al., 2007) utilize semi-supervised learning and cosine similarity to find the shortest dependency path (SDP) between protein entities. Some important kernel-based methods for PPI extraction task are graph kernel (Airola et al., 2008a), bagof-word (BoW) kernel (Sætre et al., 2007), editdistance kernel (Erkan et al., 2007) and all-path kernel (Airola et al., 2008b). (Yadav et al., 2019) presented an attention-based bidirectional long shortterm memory networks (BiLSTM) model that uses SDP between protein pairs, latent PoS and position 6398 Generated Instances of our multi-modal dataset Protein pairs Gene pairs PDB ID pairs Ensembl ID pairs Interaction type Protein1 Protein2 Gene1 Gene2 PDB1 PDB2 Ensembl1 Ensembl2 Megalin and cubilin: multifunctional endocytic receptors PROTEIN1 and PROTEIN2 are two structurally different endocytic receptors that interact to serve such functions Megalin cubilin LRP2 CUBN 2M0P 3KQ4 ENSG00000081479 ENSG00000107611 TRUE Megalin and PROTEIN1: multifunctional endocytic receptors Megalin and PROTEIN2 are two structurally different endocytic receptors that interact to serve such functions cubilin cubilin CUBN CUBN 3KQ4 3KQ4 ENSG00000107611 ENSG00000107611 FALSE PROTEIN1 and cubilin: multifunctional endocytic receptors Megalin and PROTEIN2 are two structurally different endocytic receptors that interact to serve such functions cubilin Megalin CUBN LRP2 3KQ4 2M0P ENSG00000107611 ENSG00000081479 FALSE Megalin and PROTEIN1: multifunctional endocytic receptors PROTEIN2 and cubilin are two structurally different endocytic receptors that interact to serve such functions cubilin Megalin CUBN LRP2 3KQ4 2M0P ENSG00000107611 ENSG00000081479 FALSE PROTEIN1 and PROTEIN2: multifunctional endocytic receptors Megalin and cubilin are two structurally different endocytic receptors that interact to serve such functions cubilin Megalin CUBN LRP2 3KQ4 2M0P ENSG00000107611 ENSG00000081479 FALSE PROTEIN1 and cubilin: multifunctional endocytic receptors PROTEIN2 and cubilin are two structurally different endocytic receptors that interact to serve such functions Megalin Megalin LRP2 LRP2 2M0P 2M0P ENSG00000081479 ENSG00000081479 FALSE Megalin and cubilin: multifunctional endocytic receptors Megalin and cubilin are two structurally 
different endocytic receptors that interact to serve such functions An Instance from HRPD50 Obtained 3D structure of proteins from PDB ID Obtained FASTA sequence of proteins from Ensembl ID Generated multi-modal instances from an instance of HRPD50 biomeedical corpora. Figure 1: An example of generating instances along with the structural and sequence counterparts of our multimodal dataset from HRPD50 dataset. PDB ID and Ensembl ID are utilized for obtaining protein 3D atomic structure and underlying FASTA sequence, respectively. embeddings for PPI extraction. Some of the popular deep learning based PPI extraction techniques are reported by (Shweta et al., 2016; Zhao et al., 2016; Hua and Quan, 2016; Hsieh et al., 2017). 3 Dataset Formation and Preprocessing In this study, we have extended, improved, and further developed two popular benchmark PPI corpora, namely BioInfer3 and HRPD504 dataset for the multi-modal scenario. Along with the textual information, these enhanced multi-modal datasets contain the biological counterparts of the interacting or non-interacting protein pairs. Biological information comes from the underlying FASTA sequence and the atomic structures of interacting protein pairs. 3http://corpora.informatik.hu-berlin.de/ 4https://goo.gl/M5tEJj Figure 2: Statistics of positive and negative instances across our developed multi-modal datasets. 3.1 Dataset Preparation Firstly, we have extracted data, primarily consisting of two and more protein entities, from the XML representations of two PPI corpora mentioned earlier. To simplify this complex relations among multiple protein entities, we have considered only a single protein pair at a time and found out if they are interacting or not. Among these relations, we have considered positive instances that are directly mentioned in the dataset. The other interactions are considered as non-interacting proteins, i.e., negative instances. Consider an instance of HRPD50 dataset, ”Megalin and cubilin: multifunctional endocytic receptors Megalin and cubilin are two structurally different endocytic receptors that interact to serve such functions”(Figure 1). In this particular example, we have four protein entities but we have considered the interactions between two proteins at a time and arrived at six possible relations (shown in table of Figure 1). Among these relations, only one pair (Megalin, cubilin) is denoted as interacting proteins in the HRPD50 dataset. Hence, the number of instances in our dataset is much higher than those in BioInfer and HRPD50 datasets. After generating both positive and negative instances, next we have downloaded other two modalities. To download the genomic sequence and the 3d structure of proteins, the ensemble ID and PDB ID of the proteins are required to be known. But all the biological archives contain the relationships between gene and PDB ID or Ensemble ID instead 6399 Figure 3: An overview of the proposed deep multi-modal architecture for predicting protein-protein interactions. For each modality, we have designed different deep learning based models which are finally integrated using selfattention mechanism. of any relationship between the proteins and aforementioned IDs. Hence, we have used manual annotation to find out the respective gene names of each protein name and then python based methodologies to find out Ensembl ID and PDB ID of each of these genes. 
These IDs help us in downloading the underlying genomic sequence (FASTA sequence) from 5 and structures of these proteins (3D PDB structure) from the RCSB Protein Data Bank 6 archive. The pre-processing and generation of the multimodal datasets from the biomedical corpora 5https://useast.ensembl.org/index.html 6http://www.rcsb.org/ are pictorially depicted in Figure 1. The complete exemplified multi-modal datasets are available at the provided GitHub link. 3.2 Dataset Annotation and Statistics A major challenge in creating the dataset is to manually encode the relationships between genes and proteins, a many to many mapping for biological reasons. Hence, to find out the genes which are more related to a particular protein, we asked three annotators who have strong biological knowledge. The disagreement between the annotators was less than 1% and the disagreement is solved by the ma6400 jority voting. The total number of instances of the developed multi-modal datasets are shown in Figure 2. 4 Problem Formalization Our goal is to develop a deep multi-modal architecture that can efficiently predict whether two proteins are interact with each other or not from the developed multi-modal datasets. Formally, consider the multi-modal dataset D = {Si}N i=1 = {(Ii Text,Ii Struc,Ii Seq)}N i=1 consisting of N instances. ∀i ∈{1,2,...,N},Ii Text,Ii StrucandIi Seq represent the textual, structural and sequence modality of Si sentence/instance, respectively. The proposed PPI task for an instance Si is mathematically formulated as fact(fsa(M1(Ii Text),M2(Ii Struc),M3(Ii Seq))) Here M1,M2,M3 are three different deep learning based models for text, structure and sequence modality, respectively. The extracted features are fused by self attention mechanism (fsa) which is finally fed to an activation function(fact) for predicting protein interactions. 5 Proposed Methodology The major steps of our proposed multi-modal architecture are shown in Figure 3. 5.1 Feature Extraction from Textual Modality The proposed deep learning model (M1) for extracting features from textual modality is described in Figure 4. Firstly, we use BioBERT v1.1(Lee et al., 2019) model to provide a vector representation (ui ∈Rd) of the textual instance (Ii Text). With almost same architecture of BERT (Bidirectional Encoder Representation from Transformers) model (Devlin et al., 2018), BioBERT v1.1 is pre-trained on 1M PubMed abstracts. Here, each sentence is embedded as a unique vector of size 768 (i.e., d=768) by averaging the last four transformer layers of the first token ([CLS]) of BioBERT model. Inspired by the efficient usage of stacked Bidirectional long short term memory (BiLSTM)(Yadav et al., 2019), we use this to encode the embedded representation (ui). In stacked BiLSTM, the lth level BiLSTM computes the forward ( Ð→ hl ui) and backward hidden states ( ←Ð hl ui) which are then concatenated and fed to the next (l + 1)th level of Figure 4: Proposed hybrid model combining BioBERT and stacked BiLSTM for the Textual modality. BilSTM layer. Therefore, the final representation (F i Text) of Ii Text is obtained from the last layer (L) of the stacked BiLSTM model as F i Text = M1(Ii Text) = [ Ð→ hL ui ⊕ ←Ð hL ui] (1) 5.2 Sequence Feature Extraction Firstly, we have downloaded the FASTA sequence of protein pairs of an instance (Si) from Ensembl genome browser. In this modality, each protein (Ii Seq) is represented as string of four nucleotides, i.e., Ii Seq = {A,T,G,C}+. 
The underlying genomic sequence is considered as a separate channel alongside the text modality. Since the molecular properties of a protein depend heavily on its nucleotide sequence, we apply a capsule network (Sabour et al., 2017) to capture the spatial information between nucleotides. To this end, we first convert the four nucleotides into one-hot vectors, so that each protein is represented as a 2D matrix O ∈ {0,1}^{4×m}, where m is the number of nucleotides in the sequence. Three convolutional layers (f_{CONV}) are then applied to O, the output of the third layer is fed to the primary capsule layer, and its output is fed to the secondary capsule layer, which provides the final representation (F^i_{Seq}) of the sequence modality. The final feature vector obtained from the developed deep architecture (M_3) is

F^i_{Seq} = M_3(I^i_{Seq}) = f_{capsule}(f_{CONV}(O))

Figure 5: Capsule network-based deep model for extracting features from the underlying genomic sequence of proteins.

Figure 6: Graph convolutional neural network-based deep model for extracting features from the molecular structure of proteins.

5.3 Structural Feature Extraction

For the structure modality, we first download the protein 3D structure from the RCSB Protein Data Bank website and obtain the atomic coordinates from the PDB file. Among all the modalities, the structural modality is the most relevant for inferring biological information; here we consider the atomic structure of the proteins. Inspired by the inherent capability of graph convolutional neural networks (GCNNs) (Kipf and Welling, 2016; Zamora-Resendiz and Crivelli, 2019) to learn effective latent representations of graphs, we use a GCNN to learn a local neighborhood representation around each atom of a protein. For this structural modality, the developed model (Figure 6) learns chemical bonding information from the atomic structure of the proteins rather than from a corresponding image. Each protein, which consists of a set of atoms {a_1, a_2, ..., a_n}, has an adjacency matrix A ∈ {0,1}^{n×n} and a node feature matrix X ∈ R^{n×d_v}. In this study, we consider the two proteins (P_1, P_2) of an instance, extract their features (y_1, y_2) with the GCNN, and concatenate them for the final representation (F^i_{Struc}). The GCNN takes A and X of each protein as input, and the structural feature is represented as

F^i_{Struc} = M_2(I^i_{Struc}) = [y_1 \oplus y_2]    (2)
y_j|_{j \in \{1,2\}} = f(H^i_j, A_j) = \sigma(A_j, H^i_j, W^i_j)    (3)

Here, ⊕, f, and σ are the concatenation operator, a non-linear activation function, and the propagation rule, respectively. W^i_j is the weight matrix of layer i for protein P_j, and H^i_j is defined as f(H^{i-1}_j, A_j) with H^0_j = X_j.

5.4 Attention-based Multi-modal Integration

After extracting the features of the three modalities (text, protein sequence, and protein structure), we fuse them using an attention mechanism, which focuses on the features most relevant to the task at hand. In this study, we use the self-attention mechanism of the Transformer model to produce the integrated feature representation (F) of the i-th instance (S_i) as

F = [W^i_{Text} F^i_{Text} \oplus W^i_{Seq} F^i_{Seq} \oplus W^i_{Struc} F^i_{Struc}]    (4)

Here, the W^i terms represent the attention weights of the respective modalities. Finally, this representation (F) is fed to a softmax layer for final classification.
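A minimal PyTorch-style sketch of the fusion step in Eq. (4), attention weights over the three modality features followed by the two-hidden-layer classifier mentioned in Section 6.1, is given below. The per-modality scoring layers, the hidden sizes, and the 768/128/64-dimensional inputs (only the 768-dimensional BioBERT vector is stated in the paper) are simplifying assumptions; the paper itself uses Transformer self-attention encoders at this step.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Sketch of Eq. (4): weight each modality feature by an attention score,
    concatenate, and classify. d_* are the per-modality feature sizes."""
    def __init__(self, d_text, d_seq, d_struc, n_classes=2):
        super().__init__()
        self.scorers = nn.ModuleList([nn.Linear(d, 1) for d in (d_text, d_seq, d_struc)])
        self.classifier = nn.Sequential(
            nn.Linear(d_text + d_seq + d_struc, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes))

    def forward(self, f_text, f_seq, f_struc):
        feats = [f_text, f_seq, f_struc]
        scores = torch.cat([s(f) for s, f in zip(self.scorers, feats)], dim=-1)  # (B, 3)
        w = torch.softmax(scores, dim=-1)                                        # modality attention weights
        fused = torch.cat([w[:, i:i + 1] * f for i, f in enumerate(feats)], dim=-1)  # Eq. (4)
        return self.classifier(fused)   # logits; softmax is applied inside the loss

# example: batch of 4 instances with 768-d text, 128-d sequence and 64-d structure features
model = AttentionFusion(768, 128, 64)
logits = model(torch.randn(4, 768), torch.randn(4, 128), torch.randn(4, 64))
```

The learned scalar weights here are only meant to make the weighted concatenation of Eq. (4) explicit; swapping them for Transformer encoder outputs would recover a setup closer to the one described in the paper.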
6 Experimental Results and Analysis In this section, we have briefly described the details of the hyper-parameters and the comparative analysis of the proposed deep multi-modal architecture. To explore the role of developed multi-modal datasets along with the proposed multi-modal architecture for predicting the protein interactions, several experiments are conducted for evaluating each modality and also different combinations of the modalities. Additionally, we have compared the performance of our multi-modal approach with various state-of-the-art methods. 6.1 Details of Hyper-parameters In our proposed multi-modal architecture, for the final classification we have used softmax. Adam optimizer is used through out the multi-modal architecture. In stacked BiLSTM model for textual modality, 6 (i.e., L=6) layers of BiLSTM are used. In case of structural features, graph convolutional neural network with two hidden layers is used. For sequence modality, capsule network followed by three ReLU convolutional layers are used. In the developed capsule network, the number of primary capsules are eight along with two secondary capsules. Finally, self-attention of transformer model is utilized for integrating the features of different modalities. For self-attention, we have used three encoders which are followed by a fully connected network with two hidden layers. The output of the fully connected network is then fed to softmax for final classification. 6.2 Comparative analysis with baselines For baselines, we have compared our multi-modal approach with three uni-modal, three bi-modal and two other multi-modal architectures. • Textual modality BioBERT and stacked BiLSTM are utilized for this model. • Protein sequence modality Capsule network is utilized to understand the underlying features extracted from the protein sequences. • Protein structural modality Inspired by the effective performance of GCNN in understanding the graph representation, GCNN is applied on atomic structure of proteins. • 3D structural + sequence modality In this bimodal architecture, GCNN and capsule network are used for structural and sequence modality, respectively. Finally, self-attention is utilized to understand the integrated features of these two modalities. • Textual + sequence modality In this model, self-attention is applied on the extracted features of textual and sequence modality. • Textual + 3D structure modality: To learn the different attributes discussed in the text and protein structural modality, self-attention mechanism is applied to fuse them. • Multi-modal approach 1 This architecture of this baseline is the same as the proposed multimodal approach, except the learned features of each modality are simply concatenated instead of using any attention mechanism. 
• Multi-modal approach 2 In this model, attention mechanism is applied for integrating 6403 Textual modality Protein sequence modality Protein structural modality Textual + sequence modality Textual + 3D structure modality 3D structural + sequence modality Multi-modal approach 1 Multi-modal approach 2 Proposed approach BioInfer Precision 54.42 50.63 59.34 64.51 69.04 68.15 79.16 83.77 86.81 Recall 87.45 83.68 91.63 87.45 88.49 89.53 87.44 86.40 89.53 F-measure 67.09 63.09 72.04 74.25 77.54 77.39 83.11 85.07 88.15 HRPD50 Precision 90.44 86.95 91.75 91.01 94.79 93.57 96.51 96.61 96.93 Recall 58.67 41.32 69.01 62.81 75.21 75.21 74.38 76.44 78.51 F-measure 71.17 56.02 78.77 74.32 83.87 83.39 84.01 85.35 86.75 Table 1: Comparative study of our proposed deep multi-modal approach with several baselines in terms of precision, recall, F-measure the features of textual, protein sequence and structural modalities. For extracting the features from textual, protein sequence and protein structure, we use BioBERT, BiLSTM and CNN, respectively. The results reported in Table 1 illustrate the supremacy of the proposed multi-modal approach over other baselines. 6.3 Comparison with State-of-the-art Additionally, along with the baselines, we have compared the performance of our multi-modal approach with several existing works reported in the literature. For BioInfer dataset, we have compared our proposed method with nine state-of-theart models. These existing methods are based on different techniques like kernel-based (Choi and Myaeng, 2010; Tikk et al., 2010; Qian and Zhou, 2012; Li et al., 2015), deep neural network-based (Zhao et al., 2016), multi-channel dependencybased convolutional neural network model (Peng and Lu, 2017), semantic feature embedding (Choi, 2018) and shortest dependency path (Hua and Quan, 2016). Along with the aforementioned methods, we have also compared our approach with a recent deep learning-based approach proposed by (Yadav et al., 2019). The comparative performance analysis for BioInfer dataset is tabulated in Table Precision Recall F-score Proposed Model 86.81 89.53 88.15 (Yadav et al., 2019) 80.81 82.57 81.68 (Hua and Quan, 2016) 73.40 77.00 75.20 (Choi, 2018) 72.05 77.51 74.68 (Qian and Zhou, 2012) 63.61 61.24 62.40 (Peng and Lu, 2017) 62.70 68.2 65.30 (Zhao et al., 2016) 53.90 72.9 61.60 (Tikk et al., 2010) 53.30 70.10 60.00 (Li et al., 2015) 72.33 74.94 73.61 (Choi and Myaeng, 2010) 74.50 70.90 72.60 Table 2: Comparative analysis of the proposed multimodal approach with state-of-the-art techniques for BioInfer dataset. 2. We have also compared our approach with nine existing approaches for HRPD50 dataset. The comparative results for HRPD50 dataset are presented in Table 3. 6.4 Discussion By analyzing the above comparative study, we can infer that the overall performance of our proposed multi-modal approach surpasses other baselines and existing methods. Among the baseline models, proposed multi-modal approach outperforms its unimodal and bimodal counterparts. Among the uni-modal architecture, structural modality outperforms other two modalities which suggests the importance of structural modality over textual and sequence modalities. The sequence modality performs poorly because of its huge length (length of most of the sequences is approx 10,000 nucleotides). Among the bimodal architectures, (textual + structural) model surpasses other bimodal and unimodal counterparts. 
The (textual + structural) fusion yields improvements of 5.1% and 5.5% in F-score over the best unimodal architecture on the HRPD50 and BioInfer datasets, respectively. Similarly, our proposed multi-modal architecture improves over its bi-modal counterparts. The proposed multi-modal architecture also shows an average improvement of 3.87% and 2.24% in F-score over multi-modal approach 1 and multi-modal approach 2, respectively. This improvement indicates that, in addition to the multiple modalities, the underlying deep learning models and the fusion technique contribute significantly to the performance of the overall architecture.

Table 3: Comparative analysis of the proposed multi-modal approach with other state-of-the-art approaches for the HRPD50 dataset.

Model                            | Precision | Recall | F-score
Proposed Model                   |   96.93   |  78.51 |  86.75
(Yadav et al., 2019)             |   79.92   |  77.58 |  78.73
(Tikk et al., 2010)              |   68.20   |  69.80 |  67.80
(Tikk et al., 2010) (with SVM)   |   68.20   |  69.80 |  67.80
(Palaga, 2009)                   |   66.70   |  80.20 |  70.90
(Airola et al., 2008a) (APG)     |   64.30   |  65.80 |  63.40
(Van Landeghem et al., 2008)     |   60.00   |  51.00 |  55.00
(Miwa et al., 2009)              |   68.50   |  76.10 |  70.90
(Airola et al., 2008a) (Co-occ)  |   38.90   | 100.00 |  55.40
(Pyysalo et al., 2008)           |   76.00   |  64.00 |  69.00

In addition, Tables 2 and 3 indicate that the proposed multi-modal architecture outperforms the best and most recent existing methods on the BioInfer and HRPD50 datasets, respectively. We have performed Welch's t-test to show that the improvements obtained by the proposed approach are statistically significant. From the above comparative study, it is evident that our proposed multi-modal approach identifies protein interactions effectively and can be further improved in several ways.

6.5 Error Analysis

After thoroughly analyzing the false positive and false negative instances, we infer the following possible sources of error:

1. Instances that contain a large number of protein entities lead to misclassification. The maximum number of proteins in an instance of HRPD50 and BioInfer is 26 and 24, respectively, and such instances have a high chance of misclassification. For example: "Mutations in Saccharomyces cerevisiae RFC5, DPB11, MEC1, DDC2, MEC3, PDS1, CHK1, PDS1, and DUN1 have increased the rate of genome rearrangements up to 200-fold whereas mutations in RAD9, RAD17, RAD24, BUB3, and MAD3 have little effect."

2. Repetitive mentions of the same protein entity add noise that leads to a loss of contextual information. For example: "Here we demonstrate ... CLIP-170 and LIS1 Overexpression of CLIP-170 results ... phospho-LIS1 ... that CLIP-170 and LIS1 regulate ... that LIS1 is a regulated adapter between CLIP-170 ... MT dynamics".

3. For the sequence modality, we consider the underlying FASTA sequence of the proteins. The length of a sequence varies from 100 to 10,000 nucleotides, and the deep learning-based model is unable to process such long chains of nucleotides, which leads to misclassification.

7 Conclusion and Future Work

In this work, we have generated multi-modal protein-protein interaction databases by amalgamating protein structures and sequences with the existing text information available in the biomedical literature. The process of generating multi-modal datasets from PPI corpora is illustrated with examples. In addition, we have proposed a novel deep multi-modal architecture for handling the multi-modal scenario for PPIs. For each modality (textual, protein sequence, and protein atomic structure), we have developed different deep learning models for efficient feature extraction.
A detailed comparative analysis proves that the proposed multi-modal architecture outperforms other strong baselines and existing models. Future work aims at enhancing sequence feature extraction methods to improve the classification performance as those suffer from low accuracy. Further there are plenty of options for improving the fusion technique to enhance the overall performance of the model. Acknowledgements Pratik Dutta acknowledges Visvesvaraya PhD Scheme for Electronics and IT, an initiative of Ministry of Electronics and Information Technology (MeitY), Government of India for fellowship support. Dr. Sriparna Saha gratefully acknowledges the Young Faculty Research Fellowship (YFRF) Award, supported by Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia) for carrying out this research. References Antti Airola, Sampo Pyysalo, Jari Bj¨orne, Tapio Pahikkala, Filip Ginter, and Tapio Salakoski. 2008a. All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning. BMC bioinformatics, 9(11):S2. Antti Airola, Sampo Pyysalo, Jari Bj¨orne, Tapio Pahikkala, Filip Ginter, and Tapio Salakoski. 2008b. A graph kernel for protein-protein interaction extraction. In Proceedings of the workshop on current trends in biomedical natural language processing, pages 1–9. Association for Computational Linguistics. Ilseyar Alimova and Elena Tutubalina. 2019. Detecting adverse drug reactions from biomedical texts with 6405 neural networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 415– 421. Babak Alipanahi, Andrew Delong, Matthew T Weirauch, and Brendan J Frey. 2015. Predicting the sequence specificities of dna-and rna-binding proteins by deep learning. Nature biotechnology, 33(8):831. Takayuki Amemiya, M Michael Gromiha, Katsuhisa Horimoto, and Kazuhiko Fukui. 2019. Drug repositioning for dengue haemorrhagic fever by integrating multiple omics analyses. Scientific reports, 9(1):523. Masaki Asada, Makoto Miwa, and Yutaka Sasaki. 2018. Enhancing drug-drug interaction extraction from texts by molecular structure information. arXiv preprint arXiv:1805.05593. Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. 2007. Greedy layer-wise training of deep networks. In Advances in neural information processing systems, pages 153–160. Christian Blaschke, Miguel A Andrade, Christos A Ouzounis, and Alfonso Valencia. 1999. Automatic extraction of biological information from scientific text: protein-protein interactions. In Ismb, volume 7, pages 60–67. Long Chen, Hanwang Zhang, Jun Xiao, Liqiang Nie, Jian Shao, Wei Liu, and Tat-Seng Chua. 2017. Scacnn: Spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5659–5667. Sheng-Yeh Chen, Chao-Chun Hsu, Chuan-Chun Kuo, Lun-Wei Ku, et al. 2018. Emotionlines: An emotion corpus of multi-party conversations. arXiv preprint arXiv:1802.08379. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Sung-Pil Choi. 2018. 
Extraction of protein–protein interactions (ppis) from the literature by deep convolutional neural networks with various feature embeddings. Journal of Information Science, 44(1):60–73. Sung-Pil Choi and Sung-Hyon Myaeng. 2010. Simplicity is better: revisiting single kernel ppi extraction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 206–214. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Pratik Dutta and Sriparna Saha. 2017. Fusion of expression values and protein interaction information using multi-objective optimization for improving gene clustering. Computers in biology and medicine, 89:31–43. Pratik Dutta, Sriparna Saha, Saraansh Chopra, and Varnika Miglani. 2019a. Ensembling of gene clusters utilizing deep learning and protein-protein interaction information. IEEE/ACM transactions on computational biology and bioinformatics. Pratik Dutta, Sriparna Saha, and Saurabh Gulati. 2019b. Graph-based hub gene selection technique using protein interaction information: Application to sample classification. IEEE journal of biomedical and health informatics, 23(6):2670–2676. Pratik Dutta, Sriparna Saha, Sanket Pai, and Aviral Kumar. 2020. A protein interaction information-based generative model for enhancing gene clustering. Scientific Reports (Nature Publisher Group), 10(1). Gunes Erkan, Arzucan Ozgur, and Dragomir R Radev. 2007. Semi-supervised classification for extracting protein interaction sentences using dependency parsing. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Chenyou Fan, Xiaofan Zhang, Shu Zhang, Wensheng Wang, Chi Zhang, and Heng Huang. 2019. Heterogeneous memory enhanced multimodal attention model for video question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1999–2007. Claudio Giuliano, Alberto Lavelli, and Lorenza Romano. 2006. Exploiting shallow linguistic information for relation extraction from biomedical literature. In 11th Conference of the European Chapter of the Association for Computational Linguistics. Alexander Goncearenco, Minghui Li, Franco L Simonetti, Benjamin A Shoemaker, and Anna R Panchenko. 2017. Exploring protein-protein interactions as drug targets for anti-cancer therapy with in silico workflows. In Proteomics for Drug Discovery, pages 221–236. Springer. Yu-Lun Hsieh, Yung-Chun Chang, Nai-Wen Chang, and Wen-Lian Hsu. 2017. Identifying proteinprotein interactions in biomedical literature using recurrent neural networks with long short-term memory. In Proceedings of the eighth international joint conference on natural language processing (volume 2: short papers), pages 240–245. Lei Hua and Chanqin Quan. 2016. A shortest dependency path based convolutional neural network for protein-protein relation extraction. BioMed research international, 2016. 6406 Minlie Huang, Xiaoyan Zhu, Yu Hao, Donald G Payan, Kunbin Qu, and Ming Li. 2004. Discovering patterns to extract protein–protein interactions from full texts. Bioinformatics, 20(18):3604–3612. Mengqi Jin, Mohammad Taha Bahadori, Aaron Colak, Parminder Bhatia, Busra Celikkaya, Ram Bhakta, Selvan Senthivel, Mohammed Khalilia, Daniel Navarro, Borui Zhang, et al. 2018. 
Improving hospital mortality prediction with medical named entities and multimodal learning. arXiv preprint arXiv:1811.12276. Ritu Khare, Robert Leaman, and Zhiyong Lu. 2014. Accessing biomedical literature in the current information landscape. In Biomedical Literature Mining, pages 11–31. Springer. Thomas N Kipf and Max Welling. 2016. Semisupervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Maxat Kulmanov, Mohammed Asif Khan, and Robert Hoehndorf. 2017. Deepgo: predicting protein functions from sequence and interactions using a deep ontology-aware classifier. Bioinformatics, 34(4):660–668. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. nature, 521(7553):436. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics. Lishuang Li, Rui Guo, Zhenchao Jiang, and Degen Huang. 2015. An approach to improve kernel-based protein–protein interaction extraction by learning from large-scale network data. Methods, 83:44–50. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer. Makoto Miwa, Rune Sætre, Yusuke Miyao, and Jun’ichi Tsujii. 2009. Protein–protein interaction extraction by leveraging multiple kernels and parsers. International journal of medical informatics, 78(12):e39–e46. Toshihide Ono, Haretsugu Hishigaki, Akira Tanigami, and Toshihisa Takagi. 2001. Automated extraction of information on protein–protein interactions from the biological literature. Bioinformatics, 17(2):155– 161. Peter Palaga. 2009. Extracting relations from biomedical texts using syntactic information. M´emoire de DEA, Technische Universit¨at Berlin, 138. Peggy L Peissig, Luke V Rasmussen, Richard L Berg, James G Linneman, Catherine A McCarty, Carol Waudby, Lin Chen, Joshua C Denny, Russell A Wilke, Jyotishman Pathak, et al. 2012. Importance of multi-modal approaches to effectively identify cataract cases from electronic health records. Journal of the American Medical Informatics Association, 19(2):225–234. Yifan Peng and Zhiyong Lu. 2017. Deep learning for extracting protein-protein interactions from biomedical literature. arXiv preprint arXiv:1706.01556. Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Gautam Naik, Erik Cambria, and Rada Mihalcea. 2018. Meld: A multimodal multi-party dataset for emotion recognition in conversations. arXiv preprint arXiv:1810.02508. Sampo Pyysalo, Antti Airola, Juho Heimonen, Jari Bj¨orne, Filip Ginter, and Tapio Salakoski. 2008. Comparative analysis of five protein-protein interaction corpora. In BMC bioinformatics, volume 9, page S6. BioMed Central. Longhua Qian and Guodong Zhou. 2012. Tree kernelbased protein–protein interaction extraction from biomedical literature. Journal of biomedical informatics, 45(3):535–543. Zhi Qiao, Xian Wu, Shen Ge, and Wei Fan. 2019. Mnn: multimodal attentional neural networks for diagnosis prediction. Extraction, 1:A1. Syed Arbaaz Qureshi, Ga¨el Dias, Mohammed Hasanuzzaman, and Sriparna Saha. 2020. Improving depression level estimation by concurrently learning emotion intensity. IEEE Computational Intelligence Magazine. Syed Arbaaz Qureshi, Sriparna Saha, Mohammed Hasanuzzaman, and Ga¨el Dias. 2019. 
Multitask representation learning for multimodal estimation of depression level. IEEE Intelligent Systems, 34(5):45– 52. Bisakha Ray, Mikael Henaff, Sisi Ma, Efstratios Efstathiadis, Eric R Peskin, Marco Picone, Tito Poli, Constantin F Aliferis, and Alexander Statnikov. 2014. Information content and analysis methods for multi-modal high-throughput biomedical data. Scientific reports, 4:4411. Sara Sabour, Nicholas Frosst, and Geoffrey E Hinton. 2017. Dynamic routing between capsules. In Advances in neural information processing systems, pages 3856–3866. Rune Sætre, Kenji Sagae, and Jun’ichi Tsujii. 2007. Syntactic features for protein-protein interaction extraction. LBM (Short Papers), 319. Shweta, A. Ekbal, S. Saha, and P. Bhattacharyya. 2016. A deep learning architecture for protein-protein interaction article identification. In 2016 23rd International Conference on Pattern Recognition (ICPR), pages 3128–3133. 6407 Benjamin J Stapley and Gerry Benoit. 1999. Biobibliometrics: information retrieval and visualization from co-occurrences of gene names in medline abstracts. In Biocomputing 2000, pages 529–540. World Scientific. Dongdong Sun, Minghui Wang, and Ao Li. 2019. A multimodal deep neural network for human breast cancer prognosis prediction by integrating multidimensional data. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB), 16(3):841–850. Domonkos Tikk, Philippe Thomas, Peter Palaga, J¨org Hakenberg, and Ulf Leser. 2010. A comprehensive benchmark of kernel methods to extract protein– protein interactions from literature. PLoS computational biology, 6(7):e1000837. Sofie Van Landeghem, Yvan Saeys, Bernard De Baets, and Yves Van de Peer. 2008. Extracting proteinprotein interactions from text using rich feature vectors and feature selection. In 3rd International symposium on Semantic Mining in Biomedicine (SMBM 2008), pages 77–84. Turku Centre for Computer Sciences (TUCS). Shweta Yadav, Asif Ekbal, Sriparna Saha, Ankit Kumar, and Pushpak Bhattacharyya. 2019. Feature assisted stacked attentive shortest dependency path based bi-lstm model for protein–protein interaction. Knowledge-Based Systems, 166:18–29. Amir Zadeh, Rowan Zellers, Eli Pincus, and LouisPhilippe Morency. 2016. Mosi: multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos. arXiv preprint arXiv:1606.06259. Rafael Zamora-Resendiz and Silvia Crivelli. 2019. Structural learning of proteins using graph convolutional neural networks. bioRxiv, page 610444. Shifeng Zhang, Xiaobo Wang, Ajian Liu, Chenxu Zhao, Jun Wan, Sergio Escalera, Hailin Shi, Zezheng Wang, and Stan Z Li. 2019. A dataset and benchmark for large-scale multi-modal face antispoofing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 919–928. Zhehuan Zhao, Zhihao Yang, Hongfei Lin, Jian Wang, and Song Gao. 2016. A protein-protein interaction extraction approach based on deep neural network. International Journal of Data Mining and Bioinformatics, 15(2):145–164.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6408–6418 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6408 Bipartite Flat-Graph Network for Nested Named Entity Recognition Ying Luo and Hai Zhao∗ Department of Computer Science and Engineering, Shanghai Jiao Tong University Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University, Shanghai, China MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China [email protected], [email protected] Abstract In this paper, we propose a novel bipartite flatgraph network (BiFlaG) for nested named entity recognition (NER), which contains two subgraph modules: a flat NER module for outermost entities and a graph module for all the entities located in inner layers. Bidirectional LSTM (BiLSTM) and graph convolutional network (GCN) are adopted to jointly learn flat entities and their inner dependencies. Different from previous models, which only consider the unidirectional delivery of information from innermost layers to outer ones (or outside-toinside), our model effectively captures the bidirectional interaction between them. We first use the entities recognized by the flat NER module to construct an entity graph, which is fed to the next graph module. The richer representation learned from graph module carries the dependencies of inner entities and can be exploited to improve outermost entity predictions. Experimental results on three standard nested NER datasets demonstrate that our BiFlaG outperforms previous state-of-the-art models. 1 Introduction Named entity recognition (NER) aims to identify words or phrases that contain the names of predefined categories like location, organization or medical codes. Nested NER further deals with entities that can be nested with each other, such as the United States and third president of the United States shown in Figure 1, such phenomenon is quite common in natural language processing (NLP). NER is commonly regarded as a sequence labeling task (Lample et al., 2016; Ma and Hovy, 2016; ∗Corresponding author. This paper was partially supported by National Key Research and Development Program of China (No. 2017YFB0304100), Key Projects of National Natural Science Foundation of China (U1836222 and 61733011), Huawei-SJTU long term AI project, Cutting-edge Machine reading comprehension and language model. V1 V4 V2 V5 V3 Thomas Jefferson, third president of the United States { { { PER GPE V0 V5 V7 Entity Graph GPE PER PER V7 V6 V2 Figure 1: An example of nested named entity mentions. Solid lines connect the starting and ending indices of inner nested entities. Peters et al., 2017). These approaches only work for non-nested entities (or flat entities), but neglect nested entities. There have been efforts to deal with the nested structure. Ju et al. 2018 introduced a layered sequence labeling model to first recognize innermost entities, and then feed them into the next layer to extract outer entities. However, this model suffers from obvious error propagation. The wrong entities extracted by the previous layer will affect the performance of the next layer. Also, such layered model suffers from the sparsity of entities at high levels. For instance, in the well-known ACE2005 training dataset, there are only two entities in the sixth level. 
Sohrab and Miwa 2018 proposed a region-based method that enumerates all possible regions and classifies their entity types. However, this model may ignore explicit boundary information. Zheng et al. 2019 combined the layered sequence labeling model and region-based method to locate the entity boundary first, and then utilized the region classification model to predict entities. This model, however, cares less interaction among entities located in outer and inner layers. In this paper, we propose a bipartite flat-graph 6409 network (BiFlaG) for nested NER, which models a nested structure containing arbitrary many layers into two parts: outermost entities and inner entities in all remaining layers. For example, as shown in Figure 1, the outermost entity Thomas Jefferson, third president of the United States is considered as a flat (non-nested) entity, while third president of the United States (in the second layer) and the United States (in the third layer) are taken as inner entities. The outermost entities with the maximum coverage are usually identified in the flat NER module, which commonly adopts a sequence labeling model. All the inner entities are extracted through the graph module, which iteratively propagates information between the start and end nodes of a span using graph convolutional network (GCN) (Kipf and Welling, 2017). The benefits of our model are twofold: (1) Different from layered models such as (Ju et al., 2018), which suffers from the constraints of one-way propagation of information from lower to higher layers, our model fully captures the interaction between outermost and inner layers in a bidirectional way. Entities extracted from the flat module are used to construct entity graph for the graph module. Then, new representations learned from graph module are fed back to the flat module to improve outermost entity predictions. Also, merging all the entities located in inner layers into a graph module can effectively alleviate the sparsity of entities in high levels. (2) Compared with region-based models (Sohrab and Miwa, 2018; Zheng et al., 2019), our model makes full use of the sequence information of outermost entities, which take a large proportion in the corpus. The main contributions of this paper can be summarized as follows: • We introduce a novel bipartite flat-graph network named BiFlaG for nested NER, which incorporates a flat module for outermost entities and a graph module for inner entities. • Our BiFlaG fully utilizes the sequence information of outermost entities and meanwhile bidirectionally considers the interaction between outermost and inner layers, other than unidirectional delivery of information. • With extensive experiments on three benchmark datasets (ACE2005, GENIA, and KBP2017), our model outperforms previous state-of-the-art models under the same settings. 2 Model Our BiFlaG includes two subgraph modules, a flat NER module and a graph module to learn outermost and inner entities, respectively. Figure 2 illustrates the overview of our model. For the flat module, we adopt BiLSTM-CRF to extract flat (outermost) entities, and use them to construct the entity graph G1 as in Figure 2. For the graph module, we use GCN which iteratively propagates information between the start and end nodes of potential entities to learn inner entities. Finally, the learned representation from the graph module is further fed back to the flat module for better outermost predictions. 
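As a small illustration of the graph construction step mentioned in the overview above (and defined formally in Section 2.3), the following sketch builds adjacency matrices for the entity graph G1 and the adjacent graph G2 from the outermost spans predicted by the flat module. This is an assumed, minimal rendering rather than the authors' code; in particular, treating the entity-graph edges as symmetric is our assumption (the Bi-GCN later handles both edge directions explicitly).

```python
# Minimal sketch (assumption, not the authors' code) of building the two graphs
# used by the graph module from flat-module predictions: an entity graph G1 that
# fully connects tokens inside each predicted outermost entity, and an adjacent
# graph G2 that links each token to its right neighbour.
import numpy as np

def build_entity_graph(n_tokens, entity_spans):
    """entity_spans: list of (start, end) token indices, inclusive, from the flat module."""
    adj = np.zeros((n_tokens, n_tokens), dtype=np.float32)
    for start, end in entity_spans:
        for i in range(start, end + 1):
            for j in range(i + 1, end + 1):
                adj[i, j] = adj[j, i] = 1.0   # edge between any two nodes in the span
    return adj

def build_adjacent_graph(n_tokens):
    adj = np.zeros((n_tokens, n_tokens), dtype=np.float32)
    for i in range(n_tokens - 1):
        adj[i, i + 1] = 1.0                   # directed edge: left word -> right word
    return adj

# "Thomas Jefferson , third president of the United States" (9 tokens)
g1 = build_entity_graph(9, [(0, 8)])          # one outermost entity covering tokens 0-8
g2 = build_adjacent_graph(9)
print(int(g1.sum() / 2), int(g2.sum()))       # 36 undirected pairs, 8 directed edges
```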
2.1 Token Representation

Given a sequence consisting of N tokens {t_1, t_2, ..., t_N}, for each token t_i we first concatenate the word-level and character-level embeddings, t_i = [w_i; c_i], where w_i is the pre-trained word embedding and the character embedding c_i is learned following the work of Xin et al. (2018). Then we use a BiLSTM to capture sequential information for each token, x_i = BiLSTM(t_i). We take x_i as the word representation and feed it to the subsequent modules.

2.2 Flat NER Module

We adopt the BiLSTM-CRF architecture (Lample et al., 2016; Ma and Hovy, 2016; Yang and Zhang, 2018; Luo et al., 2020) in our flat module to recognize flat entities; it consists of a bidirectional LSTM (BiLSTM) encoder and a conditional random field (CRF) decoder.

BiLSTM captures bidirectional contextual information of sequences and can effectively represent the hidden states of words in context. At each step, the hidden state h of the BiLSTM is expressed as follows:

\overrightarrow{h}_i = \mathrm{LSTM}(x_i, \overrightarrow{h}_{i-1}; \overrightarrow{\theta}), \quad
\overleftarrow{h}_i = \mathrm{LSTM}(x_i, \overleftarrow{h}_{i-1}; \overleftarrow{\theta}), \quad
h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]    (1)

where \overrightarrow{\theta} and \overleftarrow{\theta} are trainable parameters, and \overrightarrow{h}_i and \overleftarrow{h}_i respectively denote the forward and backward context representations of token t_i. The output of the BiLSTM, H = {h_1, h_2, ..., h_N}, is further fed into the CRF layer.

[Figure 2: The framework of our BiFlaG model. G1 and G2 are the entity graph and adjacent graph created for GCN; each dashed line connects the start and end nodes of a potential entity. Solid red lines indicate inner entities recognized by the graph module.]

CRF (Lafferty et al., 2001) has been widely used in state-of-the-art NER models (Lample et al., 2016; Ma and Hovy, 2016; Yang and Zhang, 2018) to help make better decisions, as it models strong label dependencies by adding transition scores between neighboring labels. The Viterbi algorithm is applied to search for the label sequence with the highest probability during the decoding process. For y = {y_1, ..., y_N} being a sequence of predictions of length N, its score is defined as

s(x, y) = \sum_{i=0}^{N-1} T_{y_i, y_{i+1}} + \sum_{i=1}^{N} P_{i, y_i}    (2)

where T_{y_i, y_{i+1}} represents the transition score from y_i to y_{i+1}, and P_{i, y_i} is the score of tag y_i for the i-th word from the BiLSTM encoder. The CRF model defines a family of conditional probabilities p(y|x) over all possible tag sequences y:

p(y|x) = \frac{\exp(s(x, y))}{\sum_{\tilde{y} \in \mathcal{Y}} \exp(s(x, \tilde{y}))}    (3)

During the training phase, we maximize the log probability of the correct predictions. While decoding, we search for the tag sequence with the maximum score:

y^* = \arg\max_{\tilde{y} \in \mathcal{Y}} s(x, \tilde{y})    (4)

2.3 Graph Module

Since the original input sentences are plain texts without an inherent graphical structure, we first construct graphs based on the sequential information of the texts and the entity information from the flat module. Then, we apply GCN (Kipf and Welling, 2017; Qian et al., 2019), which propagates information between neighboring nodes in the graphs, to extract the inner entities.

Graph Construction. We create two types of graphs for each sentence, as in Figure 2. Each graph is defined as G = (V, E), where V is the set of nodes (words) and E is the set of edges.
• Entity graph G1: for all the nodes in an extracted entity extracted from the flat module, edges are added between any two nodes eij = (vi, vj), where start ≤i < j ≤end, as shown in Figure 2, allowing the outermost entity information to be utilized. 6411 • Adjacent graph G2: for each pair of adjacent words in the sentence, we add one directed edge from the left word to the right one, allowing local contextual information to be utilized. Bi-GCN. In order to consider both incoming and outgoing features for each node, we follow the work of (Marcheggiani and Titov, 2017; Fu et al., 2019), which uses Bi-GCN to extract graph features. Given a graph G = (V, E), and the word representation X = {x1, x2, ..., xN}, the graph feature f ∈RN×df learned from Bi-GCN is expressed as follows. −→ fi = ReLU( X eij∈E (−→ Wfxj + −→ bf )) ←− fi = ReLU( X eji∈E (←− Wfxj + ←− bf )) fi = [−→ fi; ←− fi] (5) where Wf ∈Rdx×df and bf ∈Rdf are trainable parameters, dx represents the dimension of word representation, df is the hidden size of GCN, ReLU is the non-linear activation function. eij represents the edge outgoing from token ti, and eji represents the edge incoming to token ti. The features of the two graphs are aggregated to get impacts of both graphs f = Wc(f1 ⊕f2) + bc (6) where Wc ∈R2df×df is the weight to be learned, bc ∈Rdf is a bias parameter. f1 and f2 are graph features of G1 and G2, respectively. After getting the graph representation F = {f1, f2, ..., fN} from Bi-GCN, we learn the entity score M ∈RN×N×L for inner layers as Mij = softmax(W3ReLU(W1fi ⊕W2fj)) (7) where W1, W2 ∈Rdf×df/2, W3 ∈Rdf×L, L is the number of entity types. Mij ∈RL represents the type probability for a span starts from token ti and ends at token tj. For inner entities, we define the ground truth entity of word pair (ti, tj) as ˆ Mij, where ti and tj are start and end nodes of a span. Cross Entropy (CE) is used to calculate the loss Linner = −( X ( ˆ Mijlog(Mij)) · I(O)+ λ1 · X ( ˆ Mijlog(Mij)) · (1 −I(O))) (8) Algorithm 1 Bipartite Flat-Graph Algorithm Input: word representations X = {x1, .., xN}, number of entity types L the dimension of word embeddings dx, the hidden size of GCN df Output: all the entities in this sequence 1: for numbers of training iterations do 2: y ←BILSTM-CRF(X) 3: create entity graph G1 based on y 4: FN×df ←BI-GCN(X, G1) 5: MN×N×L ←LINEAR(F × F) 6: transform M to graph G3 by Eq.(10) 7: Xnew ←BI-GCN(X, G3) 8: ynew ←BILSTM-CRF(Xnew) 9: entity set T ←entities in M and ynew 10: end for 11: return entity set T where Mij ∈RL denotes the entity score in the graph module. I(O) is a switching function to distinguish the loss of non-entity ’O’ and other entity types. It is defined as follows. I(O) = ( 1, if type = ’O’ 0, if type ̸= ’O’ (9) λ1 is the bias weight. The larger λ1 is, the greater impacts of entity types, and the smaller influences of non-entity ’O’ on the graph module. 2.4 BiFlaG Training The entity score M in Eq.(7) carries the type probability of each word pair in the sentence. To further consider the information propagation from inner entities to outer ones, we use Bi-GCN to generate new representations from entity score M for the flat module. The largest type score rij of the word pair (ti, tj) indicates whether this span is an entity or non-entity and the confidence score of being such type, which is obtained by a max-pooling operation: rij = ( max(mij), if type ̸= ’O’ 0, if type = ’O’ (10) where type represents the entity type or non-entity ’O’ corresponding to the maximum type score. 
When the corresponding type is O, there exits no dependencies between ti and tj, thus we set rij to 0. A new graph that carries the boundary information 6412 ACE2005 GENIA Train (%) Dev (%) Test (%) Train (%) Dev (%) Test (%) # sentences 7,285 968 1,058 15,022 1,669 1,854 with o.l. 2,820 (39) 356 (37) 344 (33) 3,432 (23) 384 (23) 467 (25) # mentions 24,827 3,234 3,028 47,027 4,469 5,596 outermost entity 18,656 (75) 2,501 (77) 2,313 (76) 42,558 (90) 4,030 (90) 4,958 (89) inner entity 6,171 (25) 733 (23) 715 (24) 4,469 (10) 439 (10) 642 (11) Table 1: Statistics of the datasets used in our experiments: ACE2005 and KBP2017. o.l.: overlapping mentions. of inner entities is defined as G3 = (V, E), where rij ∈E. The new representation used to update flat module consists of two parts. The first part carries the previous representation of each token α1 i = Wrxi + br (11) where Wr ∈Rdx×df , br ∈Rdf . The second part aggregates inner entity dependencies of the new graph G3 α2 i = BI-GCN(xi, G3) (12) Finally, α1 i and α2 i are added to obtain the new representation xnew i = α1 i + α2 i (13) xnew i is fed into the flat module to update the parameters and extract better outermost entities. For outermost entities, we use the BIOES sequence labeling scheme and adopt CRF to calculate the loss. The losses corresponding to the two representations (X and Xnew) are added together as the outermost loss Louter = CRFX + CRFXnew (14) Entities in the sequence are divided into two disjoint sets of outermost and inner entities, which are modeled by flat module and graph module, respectively. Entities in each module share the same neural network structure. Between two modules, each entity in the flat module is either an independent node, or interacting with one or more entities in the graph module. Therefore, Our BiFlaG is indeed a bipartite graph. Our complete training procedure for BiFlaG is shown in Algorithm 1. 2.5 Loss Function Our BiFlaG model predicts both outermost and inner entities. The total loss is defined as L = Louter + λ2Linner (15) where λ2 is a weight between loss of flat module and graph module. We minimize this total loss during training phase. 3 Experiment 3.1 Dataset and Metric We evaluate our BiFlaG on three standard nested NER datasets: GENIA, ACE2005, and TACKBP2017 (KBP2017) datasets, which contain 22%, 10% and 19% nested mentions, respectively. Table 1 lists the concerned data statistics. GENIA dataset (Kim et al., 2003) is based on the GENIAcorpus3.02p1. We use the same setup as previous works (Finkel and Manning, 2009; Lu and Roth, 2015; Lin et al., 2019a). This dataset contains 5 entity categories and is split into 8.1:0.9:1 for training, development and test. ACE20052 (Walker et al., 2006) contains 7 finegrained entity categories. We preprocess the dataset following the same settings of (Lu and Roth, 2015; Wang and Lu, 2018; Katiyar and Cardie, 2018; Lin et al., 2019a) by keeping files from bn, nw and wl, and splitting these files into training, development and test sets by 8:1:1, respectively. KBP2017 Following (Lin et al., 2019a), we evaluate our model on the 2017 English evaluation dataset (LDC2017E55). The training and development sets contain previous RichERE annotated datasets (LDC2015E29, LDC2015E68, LDC2016E31 and LDC2017E02). The datasets are split into 866/20/167 documents for training, development and test, respectively. Metric Precision (P), recall (R) and F-score (F1) are used to evaluate the predicted entities. 
An entity is confirmed correct if it exists in the target labels, regardless of the layer at which the model makes this prediction. 1http://www.geniaproject.org/genia-corpus/posannotation 2https://catalog.ldc.upenn.edu/LDC2006T06 (ACE2005) 6413 ACE2005 GENIA KBP2017 Model P R F1 P R F1 P R F1 LSTM-CRF (Lample et al., 2016) 70.3 55.7 62.2 75.2 64.6 69.5 71.5 53.3 61.1 Multi-CRF 69.7 61.3 65.2 73.1 64.9 68.8 69.7 60.8 64.9 layered-CRF (Ju et al., 2018) 74.2 70.3 72.2 78.5 71.3 74.7 LSTM. hyp (Katiyar and Cardie, 2018) 70.6 70.4 70.5 79.8 68.2 73.6 Segm. hyp [POS] (Wang and Lu, 2018) 76.8 72.3 74.5 77.0 73.3 75.1∗ 79.2 66.5 72.3 Exhaustive (Sohrab and Miwa, 2018) 4 73.3 68.3 70.7 Anchor-Region [POS] (Lin et al., 2019a) 76.2 73.6 74.9 75.8 73.9 74.8 77.7 71.8 74.6∗ Merge & Label (Fisher and Vlachos, 2019) 75.1 74.1 74.6† Boundary-aware (Zheng et al., 2019) 75.9 73.6 74.7† GEANN [Gazetter] (Lin et al., 2019b) 77.1 73.3 75.2∗ KBP2017 Overview (Ji et al., 2017) 72.6 73.0 72.8† BiFlaG 75.0 75.2 75.1 77.4 74.6 76.0 77.1 74.3 75.6 (-0.1∗) (+0.9∗) (+1.0∗) Table 2: Experimental results5 on ACE2005, GENIA and KBP2017 datasets. POS and Gazetteer indicates using additional POS tags and gazetteers. † represents previous state-of-the-art results under the same settings with our experiments, ∗represents state-of-the-art results with POS tags or gazetteers, values in parentheses are also compared with them. 3.2 Parameter Settings Our model 3 is based on the framework of (Yang and Zhang, 2018). We conduct optimization with the stochastic gradient descent (SGD) and Adam for flat and GCN modules, respectively. For GENIA dataset, we use the same 200-dimension pretrained word embedding as (Ju et al., 2018; Sohrab and Miwa, 2018; Zheng et al., 2019). For ACE2005 and KBP2017 datasets, we use the publicly available pre-trained 100-dimension GloVe (Pennington et al., 2014) embedding. We train the character embedding as in (Xin et al., 2018). The learning rate is set to 0.015 and 0.001 for flat and GCN modules, respectively. We apply dropout to embeddings and the hidden states with a rate of 0.5. The hidden sizes of BiLSTM and GCN are both set to 256. The bias weights λ1 and λ2 are both set to 1.5. 3.3 Results and Comparisons Table 2 compares our model to some existing state-of-the-art approaches on the three benchmark datasets. Given only standard training data and publicly available word embeddings, the results in Table 2 show that our model outperforms all these models. Current state-of-the-art results on these datasets are tagged with † in Table 2, we make improvements of 0.5/1.3/2.8 F1 on ACE2005, GENIA, and KBP2017 respectively. KBP2017 contains much more entities than ACE2005 and GE3Code is available at: https://github.com/cslydia/BiFlaG. 4This result is reported by (Zheng et al., 2019), consistent with our own re-implemented results. NIA. The number of entities on test set is four times that of ACE2005. Our model has the most significant improvement on such dataset, proving the effectiveness of our BiFlaG model. More notably, our model without POS tags surpasses the previous models (Wang and Lu, 2018; Lin et al., 2019a), which use POS tags as additional representations on all three datasets. Besides, (Lin et al., 2019b) incorporate gazetteer information on ACE2005 dataset, our model also makes comparable results with theirs. Other works like (Strakov´a et al., 2019) 4, which train their model on both training and development sets, are thus not comparable to our model directly. 
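The precision, recall, and F1 values compared above follow the mention-level metric described in Section 3.1: a predicted mention counts as correct if the same span and type appear in the gold annotations, regardless of the nesting layer at which it is predicted. A minimal sketch of that computation (an illustration, not the official scorer) is given below; corpus-level scores would simply accumulate the true-positive, predicted, and gold counts over all sentences.

```python
# Span-level evaluation sketch: a predicted mention is correct if the same
# (start, end, type) triple appears in the gold annotations.
def prf1(gold_mentions, pred_mentions):
    gold, pred = set(gold_mentions), set(pred_mentions)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {(0, 8, "PER"), (2, 8, "PER"), (6, 8, "GPE")}   # nested gold mentions
pred = {(0, 8, "PER"), (6, 8, "GPE"), (3, 4, "ORG")}   # two correct, one spurious
print(prf1(gold, pred))  # (0.666..., 0.666..., 0.666...)
```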
Table 3 makes a detailed comparison on the five categories of GENIA test dataset with a layered model (Ju et al., 2018) and a region-based model (Zheng et al., 2019). Compared with region-based model, layered model seems to have higher precision and lower recall, for they are subject to error propagate, the outer entities will not be identified if the inner ones are missed. Meanwhile, regionbased model suffers from low precision, as they may generate a lot of candidate spans. By contrast, our BiFlaG model well coordinates precision and recall. The entity types Protein and DNA have the most nested entities on GENIA dataset, the improvement of our BiFlaG on these two entity types is remarkable, which can be attributed to the in4Their reported results are 75.36 and 76.44 trained on concatenated train+dev sets on ACE2005 and GENIA, respectively. They also use lemmas and POS tags as additional features. 6414 Our model Boundary-aware Layered-CRF Category P R F P R F P R F Num. DNA 72.7 72.7 72.7 73.6 67.8 70.6 74.4 69.7 72.0 1,290 RNA 84.4 84.4 84.4 82.2 80.7 81.5 90.3 79.5 84.5 117 Protein 79.5 76.5 78.0 76.7 76.0 76.4 80.5 73.2 76.7 3,108 Cell Line 75.9 67.6 71.5 77.8 65.8 71.3 77.8 65.7 71.2 462 Cell Type 76.7 72.4 74.4 73.9 71.2 72.5 76.4 68.1 72.0 619 Overall 77.4 74.6 76.0 75.8 73.6 74.7 78.5 71.3 74.7 5,596 Table 3: Our results on five categories compared to (Zheng et al., 2019) and (Ju et al., 2018) on GENIA dataset. teraction of nested information between the two subgraph modules of our BiFlaG. 3.4 Analysis of Each Module Table 4 evaluates the performance of each module on ACE2005 and GENIA datasets. Our flat module performs well on both datasets for outermost entity recognition. However, the recall of the inner entities is low on GENIA dataset. According to the statistics in Table 1, only 11% of the entities on GENIA are located in inner layers, while on ACE2005 dataset, the proportion is 24%. It can be inferred that the sparsity of the entity distribution in inner layers has a great impact on the results. If these inner entities are identified at each layer, the sparsity may be even worse. We can enhance the impact of sparse entities by increasing the weight λ1 in Eq.(14), but this may hurt precision, we set λ1 = 1.5 to have a better tradeoff between precision and recall. ACE2005 GENIA P R F P R F Outermost 73.7 75.0 74.3 78.4 78.9 78.7 Inner 58.3 55.2 56.7 50.9 34.7 41.2 Table 4: Performance of each module on ACE2005 and GENIA datasets. 3.5 Analysis of Entity Length We conduct additional experiments on ACE2005 dataset to detect the effect of the lengths of the outermost entities on the extraction of their inner entities as shown in Table 6. Our flat module can well predict outermost entities which account for a large proportion among all types of entities. In general, the performance of inner entities is affected by the extracting performance and length of their outermost entities. A shorter outermost entity is more likely to have its inner entities shared either the ACE2005 GENIA KBP2017 Flat →Grpah no graph 73.4 74.4 74.0 adjacent graph 73.8 74.9 74.7 entity graph 74.8 75.5 75.2 both graphs 75.1 76.0 75.6 Graph →Flat without 74.3 74.5 75.1 with 75.1 76.0 75.6 Table 5: Ablation study on the three benchmark datasets. first token or the last token, making the constructed graph more instructive, thus its inner entities are easier to extract. 
3.6 Ablation Study In this paper, we use the interactions of flat module and graph module to respectively help better predict outermost and inner entities. We conduct ablation study to verify the effectiveness of the interactions. The first part is the information delivery from the flat module to the graph module. We conduct four experiments: (1) no graph: we skip Eq. (5)-(6) and let graph feature f = LINEAR(x). In this case, inner entities are independent of the outermost entities and only rely on the word representation (section 2.1) which carries contextualized information. (2) adjacent graph: we further utilize the sequential information of the text to help inner entity prediction. (3) entity graph: the boundary information of outer entities can be indicative for inner entities, we construct an entity graph based on the entities extracted by the flat module. (4) both graphs: when outer entities are not recognized by the flat module, their inner entities will fail to receive the boundary information, we use the sequential information of the text to make up for the deficiency of using only entity graph. Experimental 6415 length outermost entities inner entities P R F Num. P R F Num. 1 75.9 80.6 78.2 1,260 2 72.1 74.8 73.4 488 76.6 63.6 69.5 77 3 67.8 72.2 69.9 198 67.6 56.5 61.5 85 4 62.5 60.9 61.7 112 68.1 42.3 52.2 111 5 60.7 48.7 54.0 76 56.0 37.8 45.1 74 6 46.3 46.3 46.3 41 28.0 25.9 26.9 54 7 44.4 30.8 36.4 26 21.7 16.7 18.9 30 8 64.3 40.9 50.0 22 31.8 21.2 25.5 33 9 35.7 31.3 33.3 16 23.1 19.4 21.1 31 10 57.1 22.2 32.0 18 20.0 15.4 17.4 26 Table 6: Length-wise results on ACE2005 test dataset. results show that entity graph carries more useful information than adjacent graph, which enhances the baseline by 1.4/1.1/1.2 F1 score, respectively. By combing these two graphs together, we get a larger gain of 1.7/1.6/1.6 F1 score. The second part is the information delivery from the graph module to the flat module, the new representation Xnew learned from graph module is propagated back to the flat module. Xnew is equipped with the dependencies of inner entities and shows useful, yielding an improvement of 0.8/1.5/0.5 F1 for the three benchmarks, respectively. 3.7 Inference Time We examine the inference speed of our BiFlaG with (Zheng et al., 2019), (Sohrab and Miwa, 2018) and (Ju et al., 2018) in terms of the number of words decoded per second. For all the compared models, we use the re-implemented code released by (Zheng et al., 2019) and set the same batch size 10. Compared with (Zheng et al., 2019) and (Sohrab and Miwa, 2018), our BiFlaG does not need to compute region representation for each potential entity, thus we can take full advantage of GPU parallelism. Compared with (Ju et al., 2018), which requires CRF decoding for each layer, our model only needs to calculate two modules, by contrast, the cascaded CRF layers limit their inference speed. 4 Case Study Table 7 shows a case study of each module in our model. In this example, entities my, my town, that and Krispy Kreme are nested in the entity the location in my town that was recently abandoned by Krispy Kreme. Our BiFlaG model successfully Inference Speed (t/s) 0 1000 2000 3000 4000 5000 6000 7000 6708 4751 2851 3563 BiFlaG (Ours) Zheng et al., 2019 Sohrab and Miwa, 2018 Ju et al., 2018 Figure 3: The inference speed of our BiFlaG and compared models on GENIA test set. t/s indicates token per second. extracts all these entities with exact boundaries and entity categorical labels. 
Without graph construction, nested entities my town, that and Krispy Kreme are not identified. Without interaction between the two modules, the outermost entity the location in my town that was recently abandoned by Krispy Kreme is mislabeled as LOC (location), which is actually a FAC (Facility) type, inner nested entities my, my town and Krispy Kreme are not propagated back to the flat module, which maybe helpful to correct the extracting of the outermost entity. 5 Related Work Recently, with the development of deep neural network in a wide range of NLP tasks (Bai and Zhao, 2018; Huang et al., 2018; Huang and Zhao, 2018; He et al., 2018, 2019; Li et al., 2018a,b, 2019; Zhou and Zhao, 2019; Xiao et al., 2019; Zhang and Zhao, 6416 Setence Interesting aside: Starbucks is taking over the location in my town that was recently abandoned by Krispy Kreme. Gold Label ORG: {Starbucks, Krispy Kreme}; FAC: {the location in my town that was recently abandoned by Krispy Kreme; that}; GPE: {my town}; PER: {my} No Graph ORG: {Starbucks}; LOC: {the location in my town that was recently abandoned by Krispy Kreme}; PER: {my} No interaction ORG: {Starbucks, Krispy Kreme}; LOC: {the location in my town that was recently abandoned by Krispy Kreme}; GPE: {my town}; PER: {my} BiFlaG ORG: {Starbucks, Krispy Kreme }; FAC: {the location in my town that was recently abandoned by Krispy Kreme; that}; GPE: {my town}; PER: {my} Table 7: An example of predicted results in ACE2005 test dataset. 2018; Zhang et al., 2019, 2020a,b,c), it is possible to build reliable NER systems without hand-crafted features. Nested named entity recognition requires to identity all the entities in texts that may be nested with each other. Though NER is a traditional NLP task, it is not until the very recent years that researches have been paid to this nested structure for named entities. (Lu and Roth, 2015) introduce a novel hypergraph representation to handle overlapping mentions. (Muis and Lu, 2017) further develop a gapbased tagging schema that assigns tags to gaps between words to address the spurious structures issue, which can be modeled using conventional linear-chain CRFs. However, it suffers from the structural ambiguity issue during inference. (Wang and Lu, 2018) propose a novel segmental hypergraph representation to eliminate structural ambiguity. (Katiyar and Cardie, 2018) also propose a hypergraph-based approach based on the BILOU tag scheme that utilizes an LSTM network to learn the hypergraph representation in a greedy manner. Stacking sequence labeling models to extract entities from inner to outer (or outside-to-inside) can also handle such nested structures. (Alex et al., 2007) propose several different modeling techniques (layering and cascading) to combine multiple CRFs for nested NER. However, their approach cannot handle nested entities of the same entity type. (Ju et al., 2018) dynamically stack flat NER layers, and recognize entities from innermost layer to outer ones. Their approach can deal with nested entities of the same type, but suffers from error propagation among layers. Region-based approaches are also commonly used for nested NER by extracting the subsequences in sentences and classifying their types. (Sohrab and Miwa, 2018) introduce a neural exhaustive model that considers all possible spans and classify their types. 
This work is further improved by (Zheng et al., 2019), which first apply a single-layer sequence labeling model to identify the boundaries of potential entities using context information, and then classify these boundary-aware regions into their entity type or non-entity. (Lin et al., 2019a) propose a sequence-to-nuggets approach named as Anchor-Region Networks (ARNs) to detect nested entity mentions. They first use an anchor detector to detect the anchor words of entity mentions and then apply a region recognizer to identity the mention boundaries centering at each anchor word. (Fisher and Vlachos, 2019) decompose nested NER into two stages. Tokens are merged into entities through real-valued decisions, and then the entity embeddings are used to label the entities identified. 6 Conclusion This paper proposes a new bipartite flat-graph (BiFlaG) model for nested NER which consists of two interacting subgraph modules. Applying the divideand-conquer policy, the flat module is in charge of outermost entities, while the graph module focuses on inner entities. Our BiFlaG model also facilitates a full bidirectional interaction between the two modules, which let the nested NE structures jointly learned at most degree. As a general model, our BiFlaG model can also handle non-nested structures by simply removing the graph module. In terms of the same strict setting, empirical results show that our model generally outperforms previous state-of-the-art models. 6417 References Beatrice Alex, Barry Haddow, and Claire Grover. 2007. Recognising nested named entities in biomedical text. In Biological, translational, and clinical language processing, pages 65–72. Hongxiao Bai and Hai Zhao. 2018. Deep enhanced representation for implicit discourse relation recognition. In Proceedings of the 27th International Conference on Computational Linguistics, pages 571– 583. Jenny Rose Finkel and Christopher D. Manning. 2009. Nested named entity recognition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 141–150. Joseph Fisher and Andreas Vlachos. 2019. Merge and label: A novel neural network architecture for nested NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5840–5850. Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1409–1418. Shexia He, Zuchao Li, and Hai Zhao. 2019. Syntaxaware multilingual semantic role labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5350–5359. Shexia He, Zuchao Li, Hai Zhao, and Hongxiao Bai. 2018. Syntax for semantic role labeling, to be, or not to be. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2061–2071. Yafang Huang, Zuchao Li, Zhuosheng Zhang, and Hai Zhao. 2018. Moon IME: Neural-based Chinese pinyin aided input method with customizable association. In Proceedings of ACL 2018, System Demonstrations, pages 140–145. Yafang Huang and Hai Zhao. 2018. Chinese pinyin aided IME, input what you have not keystroked yet. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2923–2929. 
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6419–6428 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6419 Connecting Embeddings for Knowledge Graph Entity Typing Yu Zhao1,∗, Anxiang Zhang2,*, Ruobing Xie3, Kang Liu4,5, Xiaojie Wang6 1Fintech Innovation Center, School of Economic Information Engineering, Southwestern University of Finance and Economics, Chengdu, China 2School of Computer Science, Carnegie Mellon University, Pittsburgh, USA 3WeChat Search Application Department, Tencent, Beijing, China 4National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China 5University of Chinese Academy of Sciences, Beijing, 100049, China 6 School of Computer Science, Beijing University of Posts and Telecommunications, Beijing, China Abstract Knowledge graph (KG) entity typing aims at inferring possible missing entity type instances in KG, which is a very significant but still under-explored subtask of knowledge graph completion. In this paper, we propose a novel approach for KG entity typing which is trained by jointly utilizing local typing knowledge from existing entity type assertions and global triple knowledge from KGs. Specifically, we present two distinct knowledge-driven effective mechanisms of entity type inference. Accordingly, we build two novel embedding models to realize the mechanisms. Afterward, a joint model with them is used to infer missing entity type instances, which favors inferences that agree with both entity type instances and triple knowledge in KGs. Experimental results on two real-world datasets (Freebase and YAGO) demonstrate the effectiveness of our proposed mechanisms and models for improving KG entity typing. The source code and data of this paper can be obtained from: https://github.com/ Adam1679/ConnectE 1 Introduction The past decade has witnessed great thrive in building web-scale knowledge graphs (KGs), such as Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007), Google Knowledge Graph (Dong et al., 2014), which usually consists of a huge amount of triples in the form of (head entity, relation, tail entity) (denoted (e, r, ˜e)). KGs usually suffer from incompleteness and miss important facts, jeopardizing their usefulness in downstream tasks such as question answering (Elsahar et al., 2018), semantic parsing (Berant et al., 2013), relation classification (Zeng et al., 2014). Hence, the task of ∗Equal Contribution. Corresponding author: Y. Zhao ([email protected]). Figure 1: Effective mechanisms of entity type inference with local typing knowledge and global triple knowledge. knowledge graph completion (KGC, i.e. completing knowledge graph entries) is extremely significant and attracts wide attention. This paper concentrates on KG entity typing, i.e. inferring missing entity type instances in KGs, which is an important sub-problem of KGC. Entity type instances, each of which is in the formed of (entity, entity type) (denoted (e, t)), are essential entries of KGs and widely used in many NLP tasks such as relation extraction (Zhang et al., 2018; Jain et al., 2018), coreference resolution (Hajishirzi et al., 2013), entity linking (Gupta et al., 2017). Most previous works of KGC focus on inferring missing entities and relationships (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015; Dettmers et al., 2017; Ding et al., 2018; Nathani et al., 2019), paying less attention to entity type prediction. 
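To make the data setting concrete, the toy sketch below (a hypothetical example of ours, mirroring the running example above, not taken from any dataset split) represents a KG as a set of (head entity, relation, tail entity) triples plus a set of (entity, entity type) assertions, and shows what a missing type instance looks like.

```python
# Hypothetical toy KG; all entity, relation, and type names are illustrative.

# Global triple knowledge: (head entity, relation, tail entity), i.e. (e, r, ~e).
triples = [
    ("Barack_Obama", "born_in", "Honolulu"),
    ("Donald_Trump", "born_in", "New_York"),
]

# Local typing knowledge: (entity, entity type), i.e. (e, t).
type_assertions = [
    ("Donald_Trump", "/people/person"),
    ("Honolulu", "/location/location"),
    ("New_York", "/location/location"),
    # (Barack_Obama, /people/person) is missing and should be inferred.
]

typed_entities = {e for e, _ in type_assertions}
all_entities = {e for h, _, t in triples for e in (h, t)}

# KG entity typing asks us to rank candidate types for the untyped entities.
print(all_entities - typed_entities)  # {'Barack_Obama'}
```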
However, KGs also usually suffer from entity types incompleteness. For instance, 10% of entities in FB15k (Bordes et al., 2013), which have the /music/artist type, miss the /people/person type (Moon et al., 2017). KG entity type incompleteness leads to some type-involved algorithms in KG-driven tasks grossly inefficient or even unavailable. To solve KG entity type incompleteness issue, in this paper we propose a novel embedding methodology to infer missing entity type instances that employs not only local typing knowledge from entity type assertions, as most conventional mod6420 els do, but also leverages global triple knowledge from KGs. Accordingly, we build two distinct knowledge-driven type inference mechanisms with these two kinds of structural knowledge. Mechanism 1. Missing entity types of an entity can be found from other entities that are close to the entity in the embedding space, using local typing knowledge as in Fig.1(Mech.1). Mechanism 2. Missing entity types of an (head or tail) entity can be inferred from the types of other (tail or head) entities through their relationships, using global triple knowledge as in Fig.1(Mech.2). The main idea behind Mech.1 is based on the observation that the learned entities’ embeddings by conventional KG embedding methods (Ji et al., 2016; Xie et al., 2016) cluster well according to their types in vector space. For instance, in Fig.1(Mech.1), given an entity Barack Obama, it’s missing hierarchical type /people/person can be induced by the given hierarchical type of similar entity Donald Trump. In addition, the key motivation behind Mech.2 is that the relationship shall remain unchanged if the entities in a triple fact are replaced with their corresponding hierarchical types. For instance, given a global triple fact (Barack Obama, born in, Honolulu), under this assumption, we can induce a new type triple (/people/person, born in, /location/location)1. Formally, ⃗ Honolulu − ⃗ Barack Obama = ⃗ /location/location − ⃗ /people/person (= ⃗ born in), which can be used to infer missing entity types, e.g. (Barack Obama, type=? ) via ⃗ Barack Obama − ⃗ Honolulu + ⃗ /location/location = ⃗ /people/person, as Mech.2 does. Fig.1 demonstrates a simple illustration of effective mechanisms of entity type inference. Both mechanisms are utilized to build our final composite model. Specifically, we build two embedding models to realize the two mechanisms respectively. First, considering entities and entity types are completely distinct objects, we build two distinct embedding spaces for them, i.e., entity space and entity type space. Accordingly, we encode (e, t) entity type instance by projecting the entity from entity space to entity type space with mapping matrix M, hence we have (1): M · e ≃t , called E2T. Moreover, we learn the plausibility of (te, r, t˜e) global type triple by newly generalizing from (e, r, ˜e) global 1For more clarity, we represent it as (/location/location, born in−1, /people/person) in Fig.1(Mech.2). triple fact, even though this type triple is not present originally. Following translating assumption (Bordes et al., 2013), we have (2): t˜e −r◦≃te , called TRT. E2T and TRT are the implementation models of the two mechanisms. Fig.2 demonstrates a brief illustration of our models. A ranking-based embedding framework is used to train our models. 
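As a rough numerical illustration of these two scoring ideas (a sketch only, not the authors' released ConnectE code), the snippet below computes the E2T energy ||M·e − t||² and the TRT energy ||t_e + r − t_ẽ||² with randomly initialized NumPy vectors; the dimensions are example values and all variable names are ours.

```python
import numpy as np

kappa, ell = 200, 100  # entity / type embedding dimensions (example values)

rng = np.random.default_rng(0)
e      = rng.normal(size=kappa)         # embedding of an entity
t      = rng.normal(size=ell)           # embedding of a candidate type
M      = rng.normal(size=(ell, kappa))  # projection from entity space to type space
t_head = rng.normal(size=ell)           # type embedding of the head entity
t_tail = rng.normal(size=ell)           # type embedding of the tail entity
r_type = rng.normal(size=ell)           # relation embedding in the type space

def score_e2t(e, t, M):
    """E2T energy: how well M @ e matches the candidate type t (lower is better)."""
    return np.sum((M @ e - t) ** 2)

def score_trt(t_head, r_type, t_tail):
    """TRT energy: translation assumption t_head + r ~= t_tail (lower is better)."""
    return np.sum((t_head + r_type - t_tail) ** 2)

print(score_e2t(e, t, M), score_trt(t_head, r_type, t_tail))
```

Lower energies indicate more plausible assertions; the ranking-based training then pushes golden assertions below corrupted ones.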
Thereby, entities, entity hierarchical types, and relationships are all embedded into low-dimensional vector spaces, where the composite energy score of both E2T and TRT are computed and utilized to determine the optimal types for (entity, entity type=?) incomplete assertions. The experimental results on real-world datasets show that our composite model achieves significant and consistent improvement compared to all baselines in entity type prediction and achieves comparable performance in entity type classification. Our contributions are as follows: • We propose a novel framework for inferring missing entity type instances in KGs by connecting entity type instances and global triple information and correspondingly present two effective mechanisms. • Under these mechanisms, we propose two novel embedding-based models: one for predicting entity types given entities and another one to encode the interactions among entity types and relationships from KGs. A combination of both models are utilized to conduct entity type inference. • We conduct empirical experiments on two real-world datasets for entity type inference, which demonstrate our model can successfully take into account global triple information to improve KG entity typing. 2 Related Works Entity typing is valuable for many NLP tasks (Yaghoobzadeh et al., 2018), such as knowledge base population (Zhou et al., 2018), question answering (Elsahar et al., 2018), etc. In recent years, researchers attempt to mine fine-grained entity types (Yogatama et al., 2015; Choi et al., 2018; Xu and Barbosa, 2018; Yuan and Downey, 2018) with external text information, such as web search query logs (Pantel et al., 2012), the textual surface patterns (Yao et al., 2013), context representation (Abhishek et al., 2017), Wikipedia (Zhou et al., 6421 Table 1: Entity type embedding models. Models Energy function Parameters Sources Training strategy Se2t(e, t) Striple(·) LM (Neelakantan et al., 2015) e⊤t N/A e, t ∈Rκ entity type instances N/A PEM (Neelakantan et al., 2015) e⊤UV⊤t N/A e ∈Rκ, t ∈Rℓ, U ∈Rκ×d,V ∈Rℓ×d entity type instance N/A RESCAL (Nickel et al., 2011) N/A e⊤Mr ˜e e, ˜e ∈Rκ, Mr ∈Rκ×κ mixed triple knowledge syn. RESCAL-ET (Moon et al., 2017) ∥e −t∥1 e⊤Mr ˜e e, ˜e, t ∈Rκ, Mr ∈Rκ×κ entity type inst./ triple know. asyn. HOLE (Nickel et al., 2016) N/A r⊤(e ⋆˜e) e, r, ˜e ∈Rκ mixed triple knowledge syn. HOLE-ET (Moon et al., 2017) ∥e −t∥1 r⊤(e ⋆˜e) e, r, ˜e, t ∈Rκ entity type inst./ triple know. asyn. TransE (Bordes et al., 2013) N/A ∥e + r −˜e∥ e, r, ˜e ∈Rκ mixed triple knowledge syn. TransE-ET (Moon et al., 2017) ∥e −t∥1 ∥e + r −˜e∥ e, r, ˜e, t ∈Rκ entity type inst./ triple know. asyn. ETE (Moon et al., 2017) ∥e −t∥1 ∥e + ˜e + c −r∥ e, r, ˜e, c, t ∈Rκ entity type inst./ triple know. asyn. ConnectE (our proposed) ∥M · e −t∥2 2 ∥e + r⋆−˜e∥2 2 , ∥te + r◦−t˜e∥2 2 e, r⋆∈Rκ, t, r◦∈Rℓ, M ∈Rℓ×κ entity type inst./ triple know. syn. 2018). Despite their success, existing methods rely on additional external sources, which might not be feasible for some KGs. To be more universal, Neelakantan et al. (2015) propose two embedding models, i.e. linear model (LM) and projection embedding model (PEM), which can infer missing entity types only with KG itself. Although PEM has more expressive power than LM, however, both of them ignore global triple knowledge, which could also be helpful for encoding entity type assertions via shared entities’ embeddings. To address this issue, Moon et al. 
(2017) propose a state-of-the-art model (ETE) to combine triple knowledge and entity type instances for entity type prediction, and build two entity type embedding methodologies: (1) Synchronous training: treat (entity, entity type) assertions as special triple facts that have a unique relationship “rdf:type”, e.g. (Barack Obama, “rdf:type”, person), and encode all mixed triple facts (original triple data fused with all generated special ones) by conventional entity relation embedding models, such as RESCAL (Nickel et al., 2011), HOLE (Nickel et al., 2016) and TransE (Bordes et al., 2013). (2) Asynchronous training: first learn the entities’ embeddings e by conventional entity relation embedding models mentioned above, and then only update entity types’ embeddings t for min ∥e −t∥ℓ1 while keeping e fixed, called RESCAL-ET, HOLE-ET, TransE-ET and ETE. Although these approaches expect to explore global triple knowledge for entity type prediction, they still lack of expressive ability due to its simplicity of embeddings. In addition, they irrationally assume both the embeddings of entities and entity types being in the same latent space (∈Rκ). Since entities and entity types are completely distinct objects, it may not be reasonable to represent them in a common semantic space. In this paper, we introduce an enhanced KG entity type embedding model with better expressing and reasoning capability considering both local entity typing information and global triple knowledge in KGs. Note that incorporating more external information (Jin et al., 2018; Neelakantan et al., 2015) is not the main focus in this paper, as we only consider the internal structural information in KGs instead, which correspondingly makes our work much more challenging but also more universal and flexible due to the limited information. Recently, (Lv et al., 2018; Hao et al., 2019) also attempt to embedding structural information in KG. However, the goals and models are very different from ours. They encodes the concepts, not hierarchical types. On the contrary, we focus on the latter not the former. Table 1 summarizes the energy functions and other different settings of entity type embedding models. 3 Embedding-based Framework We consider a KG containing entity type instances of the form (e, t) ∈H (H is the training set consists of lots of (entity, entity type) assertions), where e ∈E (E is the set of all entities) is an entity in the KG with the type t ∈T (T is the set of all types). For example, e could be Barack Obama and t could be /people/person. As a single entity can have multiple types, entities in KG often miss some of their types. The aim of this work is to infer missing entity type instances in KGs. Our work concerns energy-based methods, 6422 Figure 2: Simple illustration of E2T and TRT. which learn low-dimensional vector representations (embeddings) of atomic symbols (i.e. entities, entity hierarchical types, relationships). In this framework, we learn two submodels: (1) one for predicting entity types given entities, and (2) another one to encode the interactions among entity types and relationships from KGs. The joint action of both models in prediction allows us to use the connection between triple knowledge and entity type instances to perform KG entity typing. 3.1 E2T: Mapping Entities to Types The first model (E2T) of the framework concerns the learning of a function Se2t(e, t) with local typing knowledge from entity type instances, which is designed to score the similarity of an entity e and a type t. 
The main ideas behind this model are as follows: (1) Since the learned entity embeddings cluster well when they have the same or similar types, therefore, it is rather intuitive that the entity type embedding represents the projective common concept representation of a cluster of entities, i.e., fproj(e) ≃te, ∀e ∈E. e (∈Rκ) is the embedding of the entity e, te (∈Rℓ) is the embedding of the type te. The entity type embedding represents common information of their entities, it thus should have fewer variates, i.e., ℓ< κ. (2) Since the entities and entity types are totally distinct objects, we respectively build two embedding space for them, i.e., entity space and entity type space. (3) Inspired by the previous work TranSparse (Ji et al., 2016) projecting entities from entity space to relation space with operation matrix M, which we adapted, replacing relation space with entity type space, we thus define fproj(e) = M · e (≃te). Therefore, this model consists of first projecting entity embedding into entity type space, and then computing a similarity measure between this projection and an entity type embedding. The scoring function of E2T given (e, t) is: Se2t(e, t) = ∥M · e −t∥2 ℓ2 , (1) where M ∈Rℓ×κ is a transfer matrix mapping entity embeddings into entity type space. The score is expected to be lower for a golden entity type instance and higher for an incorrect one. 3.2 TRT: Encoding Triples in KGs Using only entity type instances for training ignores much of relational knowledge that can leverage from triple facts in KGs. In order to connect this relational data with our model, we propose to learn entity type and relationship embeddings from global triple knowledge from KGs. The key motivations behind this model are: (1) As mentioned above, the entities cluster well according to their types. Therefore, we believe that an essential premise of a triple (head entity, relationship, tail entity) holds is that its corresponding entity types should first conform to this relationship. Hence, we can build a new entity type triple (head type, relationship, tail type) by replacing both head entity and tail entity with their corresponding types: i.e. (e, r, ˜e) replace −→(te, r, t˜e). (e, r, ˜e) ∈D, D is the training set consists of a lot of triples. r ∈R (R is the set of relationships). te and t˜e stand for the hierarchical types of left entity e and right entity ˜e respectively. (2) Since the relationship r remains unchanged in replacement, we build two differentiated embeddings for the i-th relationship ri in two embedding spaces: r⋆ i (∈Rκ) in entity space and r◦ i (∈Rℓ) in entity type space. (3) Given entity type triple (te, r, t˜e), under translation assumption 2 as in (Bordes et al., 2013), we have: t˜e −r◦≃te. Hence, the scoring function is defined as: Strt(te, r, t˜e) = ∥te + r◦−t˜e∥2 ℓ2 , (2) where te, r◦, t˜e ∈Rℓ. The model returns a lower score if the two entity types is close under this relationship and a higher one otherwise. Fig.2 shows an illustration of E2T and TRT. 3.3 Implementation for Entity Type Prediction Our framework can be used for entity type prediction in the following way. First, for each entity e 2We chose TransE in this paper, and it is not difficult for other enhanced translation-based methods to model triple knowledge, such as Trans(H, R, D and G) (Wang et al., 2017). 6423 that appears in the testing set, a prediction by E2T is performed with: ˆte = arg min t∈T Se2t(e, t). 
(3) In addition, a composite score (E2T+TRT) by connecting entity type instances and entity type triples with embedding model, which we call ConnectE 3, is defined as follows: Se2t+trt(e, te) = λ · Se2t(e, te)+ (1 −λ) · n 1 |P| X t˜e∈P Strt(te, r, t˜e) + 1 |Q| X t¯e∈Q Strt(t¯e, r, te) o , where λ is a hyperparameter for the trade-off. P = {t˜e|t˜e ∈T , (e, r, ˜e) ∈D} (i.e. given e is head entity, P is the set of all corresponding tail entities’ types.), and Q = {t¯e|t¯e ∈T , (¯e, r, e) ∈D} (i.e. given e is tail entity, Q is the set of all corresponding head entities’ types.). |P| and |Q| represent the total number of entity types in P and Q respectively. A prediction is performed with: ˆte = arg min te∈T Se2t+trt(e, te). (4) Hence, our final composite model ConnectE(E2T+TRT) favors predictions that agree with both entity type instances and global triple information in KGs. 3.4 Optimization We use ranking loss algorithm for training ConnectE-(E2T+TRT), in which the parameter set Θ = {E, T, R⋆, R◦, M}. E, T stand for the collection of all entities’ and types’ embeddings respectively. (R⋆, R◦) denotes the collections of relationships’ differentiated embeddings. The ranking objectives are designed to assign lower scores to true facts (including (e, r, ˜e) triple facts, (e, t) entity type instances and (te, r, t˜e) type triples) versus any corrupt ones. We build three sub-objective functions, i.e., J1, J2, J3, and implement dynamic optimization strategy, i.e., fix a partial of parameters and only update the rest when minimizing each function. (1) J1: We choose TransE (see Bordes et al. (2013)) to model triple facts as S(e, r, ˜e), in which we update the embeddings of entities (∀e ∈E) and the embeddings of relationships 3We also call it ConnectE-(E2T+TRT), and use ConnectE(E2T+0) to denote E2T for uniformity in the experiments. (∀r⋆∈R⋆). (2) J2: We only update the embeddings of entity types (∀t ∈T) and projecting matrix M, not the entities’ embeddings that have been trained in J1. (3) J3: We only update the embeddings of relationships (∀r◦∈R◦) while keeping the entity types’ embeddings fixed. The training is performed using Adagrad (Kingma and Ba, 2014). All embeddings in Θ are initialized with uniform distribution. The procedure, from J1, J2 to J3, is iterated for a given number of iterations. We have: J1 = X D X D′ [γ1 + S(e, r, ˜e) −S(e′, r, ˜e′)]+ , J2 = X H X H′ [γ2 + Se2t(e, te) −Se2t(e′, t′ e)]+ , J3 = X Z X Z′ [γ3 + Strt(te, r, t˜e) −Strt(t′ e, r, t′ ˜e)]+ γ1, γ2, γ3 > 0 are margin hyperparameters, and the corrupted datasets are built as follows: D′ :={(e′, r, ˜e)|(e, r, ˜e) ∈D, e′ ∈E, e′ ̸= e} ∪{(e, r, ˜e′)|(e, r, ˜e) ∈D, ˜e′ ∈E, ˜e′ ̸= ˜e} , H′ :={(e′, te)|(e, te) ∈H, e′ ∈E, e′ ̸= e} ∪{(e, t′ e)|(e, te) ∈H, t′ e ∈T , t′ e ̸= te} , Z′ :={(t′ e, r, t˜e)|(te, r, t˜e) ∈Z, t′ e ∈T , t′ e ̸= te} ∪{(te, r, t′ ˜e)|(te, r, t˜e) ∈Z, t′ ˜e ∈T , t′ ˜e ̸= t˜e} D, H are training datasets of triple facts and entity type instances in KG. Z is the training data of type triples, built by replacing entities in D with their corresponding entity types. 4 Experiments 4.1 Datasets We conduct the experiments on two real-world datasets (D) widely used in KG embedding literature, i.e. FB15k (Bordes et al., 2013) and YAGO43k (Moon et al., 2017), which are subsets of Freebase (Bollacker et al., 2008) and YAGO (Suchanek et al., 2007) respectively. They consist of triples, each of which is formed as (left entity, relationship, right entity). 
We utilize two entity type datasets (H, each element of which is of the form (entity, entity type)) built in (Moon et al., 2017), called FB15kET and YAGO43kET, in which the entity types are mapped to entities from FB15k and YAGO43k respectively. Moreover, we build new type triple datasets (Z, each element of which is of the form (head type, relationship, tail type)) to train our model. They are built based on D and H. First, for each triple (e, r, ẽ) ∈ D, we replace the head and the tail with their types according to H. The generated datasets are called FB15kTRT(full) and YAGO43kTRT(full). Second, considering the scalability of the proposed approach for full KGs, we further modify the generation method of type triples, which is the major training bottleneck: we discard newly generated type triples with low frequency (i.e., #frequency = 1). After that, the sizes of FB15kTRT(full) and YAGO43kTRT(full) decrease by about 90%; the resulting datasets are called FB15kTRT(disc.) and YAGO43kTRT(disc.) respectively. The statistics of the datasets are shown in Table 2. To save space, we put more data processing details (including cleaning H, building Z, etc.) on our GitHub website.

Table 2: Statistics of D, H, Z.

Dataset            #Ent     #Rel    #Train     #Valid   #Test
FB15k              14,951   1,345   483,142    50,000   59,071
YAGO43k            42,335   37      331,687    29,599   29,593

Dataset            #Ent     #Type   #Train     #Valid   #Test
FB15kET            14,951   3,851   136,618    15,749   15,780
YAGO43kET          41,723   45,182  375,853    42,739   42,750

Dataset            #Type    #Rel    #Train     #Valid   #Test
FB15kTRT(full)     3,851    1,345   2,015,338  –        –
FB15kTRT(disc.)    2,060    614     231,315    –        –
YAGO43kTRT(full)   45,128   37      1,727,708  –        –
YAGO43kTRT(disc.)  17,910   32      189,781    –        –

4.2 Entity Type Prediction

This task aims to complete a pair (entity, entity type) when its type is missing, which verifies the capability of our model for inferring missing entity type instances.

Evaluation Protocol. We focus on entity type prediction determined by Formula (3) and (4), and use ranking criteria for evaluation. First, for each test pair, we remove the type and replace it by each of the types in T in turn. The scores of these candidate pairs are computed by the related models and then sorted in ascending order, so we can obtain the exact rank of the correct type among the candidates. Finally, we use two metrics for comparison: (1) the mean reciprocal rank (MRR), and (2) the proportion of correct entities ranked in the top 1/3/10 (HITS@1/3/10, in %). Since the "Raw" evaluation setting is not as accurate as "Filter" (Bordes et al., 2013), we only report experimental results with the latter setting in this paper. The MRR is defined as

MRR = \frac{1}{|C|} \sum_{i=1}^{|C|} \frac{1}{\mathrm{rank}_i},

where C is the set of test pairs, and rank_i is the rank position of the true entity type for the i-th pair.

Implementation. The results of entity type prediction are shown in Table 3, where the results for the baselines are directly taken from the original literature (Moon et al., 2017). We do not choose LM and PEM (Neelakantan et al., 2015) as baselines since they do not utilize triple knowledge, so a comparison with them would not be fair. For training our model, we select the learning rate α ∈ {0.1, 0.05, 0.001}, the margins γ1, γ2, γ3 ∈ {0.5, 1, 2, 5, 10}, the embedding dimension pairs (κ, ℓ) ∈ {(100, 50), (150, 75), (200, 100), (250, 125)}, and the weight λ ∈ {0.5, 0.65, 0.85, 0.95}. We use negative sampling, and gradient descent with AdaGrad as our optimization approach to improve convergence performance.
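As a rough sketch of this negative-sampling, margin-based training (assuming a PyTorch-style setup; the function and variable names below are ours, and the corruption step is simplified to swapping the type index, so this is not the released implementation), the E2T part of the ranking objective can be written as:

```python
import torch

def e2t_margin_loss(M, ent_emb, type_emb, pos_pairs, neg_pairs, gamma=2.0):
    """Hinge loss: push golden (entity, type) pairs to score lower than corrupted
    ones by at least a margin gamma, in the spirit of the ranking objective above.

    pos_pairs / neg_pairs: LongTensors of shape (batch, 2) holding
    (entity index, type index); neg_pairs are corrupted copies of pos_pairs."""
    def energy(pairs):
        e = ent_emb[pairs[:, 0]]           # (batch, kappa)
        t = type_emb[pairs[:, 1]]          # (batch, ell)
        return ((e @ M.T - t) ** 2).sum(dim=1)
    return torch.clamp(gamma + energy(pos_pairs) - energy(neg_pairs), min=0).sum()

# Toy usage with random embeddings (dimensions chosen arbitrarily).
kappa, ell, n_ent, n_typ = 8, 4, 10, 5
ent_emb  = torch.nn.Parameter(torch.randn(n_ent, kappa))
type_emb = torch.nn.Parameter(torch.randn(n_typ, ell))
M        = torch.nn.Parameter(torch.randn(ell, kappa))

pos = torch.tensor([[0, 1], [2, 3]])
neg = pos.clone()
neg[:, 1] = torch.randint(n_typ, (2,))   # corrupt the type index

loss = e2t_margin_loss(M, ent_emb, type_emb, pos, neg)
loss.backward()                          # gradients flow to M and the embeddings
```

In the full objective, analogous hinge terms over corrupted triples (J1) and corrupted type triples (J3) are optimized in alternation with this one, as described above.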
During the initialization process, each embedding vector of the entities, entity types and relationships is initialized with a random number following a uniform distribution − √ 6/(m + n), where n ∈{#Ent, #Type, #Rel} and m ∈{κ, ℓ}. During the whole training process, we normalize the entity embeddings after each epoch. We select the parameters based on MRR in valid dataset. The optimal configurations are: {α = 0.1, γ1 = γ2 = γ3 = 2, κ = 200, ℓ= 100, λ = 0.85} on FB15k/ET/TRT; {α = 0.1, γ1 = γ2 = γ3 = 1, κ = 250, ℓ= 125, λ = 0.85} on YAGO43k/ET/TRT. We run 800 epochs on both datasets, and the batch size is 4096. Experimental Results. We can see from Table 3 that our ConnectEs outperform all baselines for entity type prediction in terms of all metrics on FB15kET and YAGO43kET. It confirms the capability of ConnectEs in modeling with local typing and global triple knowledge and inferring missing entity type instances in KGs. The model ConnectE(E2T+TRT)(full) achieves the highest scores. Analysis. (1) In E2T, we utilize a mapping matrix M which compresses entity embeddings into type embedding space, considering that entity type embedding represents common information of all the entities which belong to this type. The type embedding should be in a sharing subspace of entity embeddings. The experimental results of E2T compared with the baselines demonstrate that this assumption would be quite reasonable. (2) In E2T+TRT, we build new type-relation-type data, and then connect them with entity type instances. This approach provides more direct useful information to (weakly) supervise entity type prediction. For example, given a fact that head entity Barack Obama belongs to type /people/person 6425 Table 3: Entity type prediction results. Evaluation of different models on FB15kET and YAGO43kET. DATASET FB15kET YAGO43kET METRICS MRR HITS@1 HITS@3 HITS@10 MRR HITS@1 HITS@3 HITS@10 RESCAL (Nickel et al., 2011) 0.19 9.71 19.58 37.58 0.08 4.24 8.31 15.31 RES.-ET (Moon et al., 2017) 0.24 12.17 27.92 50.72 0.09 4.32 9.62 19.40 HOLE (Nickel et al., 2016) 0.22 13.29 23.35 38.16 0.16 9.02 17.28 29.25 HOLE-ET (Moon et al., 2017) 0.42 29.40 48.04 66.73 0.18 10.28 20.13 34.90 TransE (Bordes et al., 2013) 0.45 31.51 51.45 73.93 0.21 12.63 23.24 38.93 TransE-ET (Moon et al., 2017) 0.46 33.56 52.96 71.16 0.18 9.19 19.41 35.58 ETE (Moon et al., 2017) 0.50 38.51 55.33 71.93 0.23 13.73 26.28 42.18 ConnectE-(E2T+0) 0.57 +- .00 45.54 +- .28 62.31 +- .29 78.12 +- .12 0.24 +- .01 13.54 +- .12 26.20 +- .18 44.51 +- .09 ConnectE-(E2T+TRT)(disc.) 0.59 +- .01 48.54 +- .71 63.66 +- .39 78.27 +- .16 0.27 +- .01 15.1 +- .15 29.14 +- .13 47.08 +- .09 ConnectE-(E2T+TRT)(full) 0.59 +- .00 49.55 +- .62 64.32 +- .37 79.92 +- .14 0.28 +- .01 16.01 +- .12 30.85 +- .13 47.92 +- .07 and the relationship born in, we could make the best guess of the type of tail entity Honolulu as /location/location. Hence, the addition of type triples in ConnectE-(E2T+TRT) provides superior performance than ConnectE-(E2T+0). (3) Concerning about the scalability of our approach for big KGs, we utilize FB15kTRT(disc.) and YAGO43kTRT(disc.) for prediction, the training time of which reduced by 90% as the training data size decreased by 90%. Moreover, the results of ConnectE-(E2T+TRT)(disc.) show that it’s comparable with the best ConnectE-(E2T+TRT)(full). 4.3 Entity Type Classification This task aims to judge whether each entity type instance in testing data holds or not, which could be viewed as a binary classification problem. Evaluation Protocol. 
Since there are no explicit negative entity type instances in existing KGs, in order to create datasets for classification, we build negative facts by randomly switching type from entity type pairs in validation and testing set with equal number of positive and negative examples. Inspired by the evaluation metric of triple classification in (Socher et al., 2013), we calculate the scores of all entity type instances based on model energy function, and rank all instances in testing set with these scores. Those instances with lower scores are considered to be true. We use precision/recall curves to show the performances of all models. Moreover, we also compare the accuracy among different models. We first use validate set to find best threshold η. For instance, if the model score Se2t+trt(e, te) ≤η in classification, the entity type instance will be classified to be positive, otherwise to be negative. The final accuracy is based on how many facts are classified correctly. Implementation. We utilize the source codes and parameter settings of several baselines provided by (Moon et al., 2017) for this task. The optimal parameter settings for our proposed models are: {α = 0.1, γ1 = γ2 = γ3 = 2, κ = 200, ℓ= 100, λ = 0.85} on FB15kET; {α = 0.1, γ1 = γ2 = γ3 = 1, κ = 250, ℓ= 125, λ = 0.85} on YAGO43kET. In both datasets, we learn all the training data for 800 epochs and the batch size is 4096. After training, we firstly draw PR-curves with dynamic thresholds. We select the best threshold based on the accuracy in valid dataset, which is used to calculate the accuracy in test dataset. Experimental Results. We draw the PR-curves for type classification task on both datasets in Fig.3. Note that we only report the results of ConnectE(E2T+TRT)(disc.) not ConnectE-(E2T+TRT)(full), since the learning speed of the former is much more faster than the latter and its results are close to the best results of the latter. We can see from Fig.3 that when the recall rate is between 0.88 ∼ 0.97, ConnectE-(E2T+TRT)(disc.) model could achieve the highest precision rate on FB15kET. In other ranges, our ConnectE-(E2T+TRT)(disc.) model also shows comparable performance. The result is consistent on YAGO43kET. Specifically, ConnectE-(E2T+TRT)(disc.) achieves the best F1 score of 94.66% when recall = 94.27% and precision = 95.05% on FB15kET. Also, ConnectE(E2T+TRT)(disc.) surpasses other models and gets F1 score of 92.13% when precision = 93.18% and recall = 91.11% on YAGO43kET. It confirms the capability of our model, for they could not only infer missing types in KGs, but also perform well in KG entity type classification. Table 4 demonstrates the evaluation accuracy results of entity type classification, from which we can observe that: (1) On FB15kET, ConnectE(E2T+TRT)(disc.) achieves the best accuracy score (94.49%). Compared to the mostly related model ETE, our model shows 0.48% absolute performance improvement. On YAGO43kET, ConnectE(E2T+TRT)(disc.) model outperforms other models as well. The improvement of our model com6426 Figure 3: Entity type classification results (Precision/Recall Curve). Evaluate on FB15kET, YAGO43kET. pared to ETE is almost 1.51%. (2) Comparing to the improvement on YAGO43kET, the advantage ConnectE-(E2T+TRT)(disc.) has over ConnectE(E2T+0) in this task on FB15kET seems to be insignificant, which indicates that the type triples in FB15kTRT have fewer contribution on entity type classification than ones in YAGO43kTRT. 
It may be partially caused by the fact that the number of relations in YAGO43k (#Rel=37) is far less than that in FB15k (#Rel=1,345), which could considerably influence the effectiveness of the type-relationtype training set. Due to the rareness of relationships in YAGO43k, each entity usually connects with a large number of other entities through one single relationships, which means that the magnitude of |P| and |Q| in the composite model scoring function are large. After averaging in ConnectE(E2T+TRT)(disc.), it could achieve more stable and significant results on YAGO43kET. Table 4: Entity type classification results (accuracy). Dataset FB15kET YAGO43kET RESCAL-ET 90.02% 82.28% HOLE-ET 93.23% 90.14% TransE-ET 93.88% 90.76% ETE 94.01% 90.82% ConnectE (E2T+0) 94.45% 91.78% ConnectE (E2T+TRT)(disc.) 94.49% 92.33% 4.4 Case Study Table 5 shows the examples of entity type prediction by our model from FB15k/ET/TRT, which demonstrate our motivation of Mech. 2 that head type and tail type really maintain the relationship between head entity and tail entity. Given entity Peter Berg, TRT can find HITS@1 type prediction /people/person for it via the existing entity type assertion (New Youk, /location/location) and the relationship (/loc./loc./people born here) between them, i.e. ⃗ Peter Berg − ⃗ New York + ⃗ /location/location= ⃗ /people/person. Table 5: Entity type prediction examples. Extraction from FB15k/ET/TRT. Type prediction: HIT@1 Rel Tail type 1 type=? /people/person /location/location/ people born here /location/location head entity Peter Berg New York tail entity Gus Van Sant Louisville 2 type=? /americancomedy/movie /film/film/ directed by /film/director head entity Very Bad Things Peter Berg tail entity Rush Hour Brett Ratner 3 type=? /medicine/disease people/cause of death/people /people/person head entity Myocardial infarction Dick Clark tail entity Pancreatic cancer John Hurt 5 Conclusion and Future Work In this paper, we described a framework for leveraging global triple knowledge to improve KG entity typing by training not only on (entity, entity type) assertions but also using newly generated (head type, relationship, tail type) type triples. Specifically, we propose two novel embedding-based models to encode entity type instances and entity type triples respectively. The connection of both models is utilized to infer missing entity type instances. The empirical experiments demonstrate the effectiveness of our proposed model. Our modeling method is general and should apply to other typeoriented tasks. Next, we are considering to use this framework to conduct KG entity type noise detection. 6427 Acknowledgments The authors would like to thank all anonymous reviewers for their insightful comments. We also want to thank Zhiyuan Liu (Tsinghua University) and Linmei Hu (BUPT) for their useful suggestions and comments on early drafts. This work was supported by the National Natural Science Foundation of China under Grant No.61922085, 61906159, the Sichuan Science and Technology Program under Grant No.2018JY0607, the Fundamental Research Funds for the Central Universities under Grant No.JBK2003008, Fintech Innovation Center, and Financial Intelligence and Financial Engineering Key Laboratory of Sichuan Province. References Abhishek Abhishek, Ashish Anand, and Amit Awekar. 2017. Fine-grained entity type classification by jointly learning representations and label embeddings. In Proceedings of EACL, page 797807. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. 
Semantic parsing on freebase from question-answer pairs. In Proceedings of EMNLP, pages 1533–1544. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of SIGMOD, pages 1247–1250. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Proceedings of NeurIPS, pages 2787–2795. Eunsol Choi, Omer Levy, Yejin Choi, and Luke S. Zettlemoyer. 2018. Ultra-fine entity typing. In Proceddings of ACL. Tim Dettmers, Minervini Pasquale, Stenetorp Pontus, and Sebastian Riedel. 2017. Convolutional 2d knowledge graph embeddings. In Proceedings of AAAI, pages 1811–1818. Boyang Ding, Quan Wang, Bin Wang, and Li Guo. 2018. Improving knowledge graph embedding using simple constraints. In Proceedings of ACL, pages 110–121, Melbourne, Australia. Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of SIGKDD, pages 601– 610. Hady Elsahar, Christophe Gravier, and Frederique Laforest. 2018. Zero-shot question generation from knowledge graphs for unseen predicates and entity types. In Proceedings of NAACL, pages 218–228. Nitish Gupta, Sameer Singh, and Dan Roth. 2017. Entity linking via joint encoding of types, descriptions, and context. In Proceedings of EMNLP, pages 2681–2690. Hannaneh Hajishirzi, Leila Zilles, Daniel S. Weld, and Luke Zettlemoyer. 2013. Joint coreference resolution and named-entity linking with multi-pass sieves. In Proceddings of EMNLP, pages 289–299. Junheng Hao, Muhao Chen, Wenchao Yu, Yizhou Sun, , and Wei Wang. 2019. Universal representation learning of knowledge bases by jointly embedding instances and ontological concepts. In Proceedings of KDD 2019. Prachi Jain, Pankaj Kumar, and Soumen Chakrabarti. 2018. Type-sensitive knowledge base inference without explicit type supervision. In Proceedings of ACL. Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2016. Knowledge graph completion with adaptive sparse transfer matrix. In Proceddings of AAAI. Hailong Jin, Lei Hou, Juanzi Li, and Tiansi Dong. 2018. Attributed and predictive entity embedding for finegrained entity typing in knowledge bases. In Proceedings of COLING 2018, pages 282–292. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In arXiv preprint arXiv:1412.6980. Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of AAAI, pages 2181–2187. Xin Lv, Lei Hou, Juanzi Li, and Zhiyuan Liu. 2018. Differentiating concepts and instances for knowledge graph embedding. In Proceedings of EMNLP 2018, page 19711979. Chang Moon, Paul Jones, and Nagiza F. Samatova. 2017. Learning entity type embeddings for knowledge graph completion. In Proceedings of CIKM, pages 2215–2218. Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. 2019. Learning attention-based embeddings for relation prediction in knowledge graphs. In Proceedings of ACL. Arvind Neelakantan, Ming-Wei Chang, and . 2015. Inferring missing entity type instances for knowledge base completion: New dataset and methods. In Proceedings of NAACL 2012, page 515525. 6428 Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. 
Holographic embeddings of knowledge graphs. In Proceedings of AAAI, pages 1955– 1961. Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of ICML, pages 809–816. Patrick Pantel, Thomas Lin, and Michael Gamon. 2012. Mining entity types from query logs via user intent modeling. In Proceedings of ACL, pages 563–571. Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Proceedings of NeurIPS, pages 926–934. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of WWW, pages 697–706. Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. TKDE, 29(12):2724– 2743. Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of AAAI, pages 1112–1119. Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2016. Representation learning of knowledge graphs with hierarchical types. In Proceedings of IJCAI, pages 2965–2971. Peng Xu and Denilson Barbosa. 2018. Neural fine grained entity type classification with hierarchy aware loss. In Proceedings of NAACL. Yadollah Yaghoobzadeh, Heike Adel, and Hinrich Schtze. 2018. Corpus-level fine-grained entity typing. Journal of Artificial Intelligence Research, 61:835–862. Limin Yao, Sebastian Riedel, , and Andrew McCallum. 2013. Universal schema for entity type prediction. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, pages 79–84. Dani Yogatama, Daniel Gillick, and Nevena Lazic. 2015. Embedding methods for fine grained entity type classification. In Proceedings of ACL, page 291296. Zheng Yuan and Doug Downey. 2018. Otyper: A neural architecture for open named entity typing. In Proceddings of AAAI. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING, pages 23–29. Richong Zhang, Fanshuang Kong, Chenyue Wang, and Yongyi Mao. 2018. Embedding of hierarchically typed knowledge bases. In Proceddings of AAAI. Ben Zhou, Daniel Khashabi, Chen-Tse Tsai, and Dan Roth. 2018. Zero-shot open entity typing as typecompatible grounding. In Procedddings of EMNLP, pages 2065–2076.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6429–6440 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6429 Continual Relation Learning via Episodic Memory Activation and Reconsolidation Xu Han1∗, Yi Dai1∗, Tianyu Gao1, Yankai Lin2, Zhiyuan Liu1† , Peng Li2, Maosong Sun1, Jie Zhou2 1 State Key Lab on Intelligent Technology and Systems, Institute for Artificial Intelligence, Department of Computer Science and Technology, Tsinghua University, Beijing, China 2Pattern Recognition Center, WeChat AI, Tencent Inc., China {hanxu17,daiy17,gty16}@mails.tsinghua.edu.cn {liuzy,sms}@mail.tsinghua.edu.cn {yankailin,patrickpli,withtomzhou}@tencent.com Abstract Continual relation learning aims to continually train a model on new data to learn incessantly emerging novel relations while avoiding catastrophically forgetting old relations. Some pioneering work has proved that storing a handful of historical relation examples in episodic memory and replaying them in subsequent training is an effective solution for such a challenging problem. However, these memorybased methods usually suffer from overfitting the few memorized examples of old relations, which may gradually cause inevitable confusion among existing relations. Inspired by the mechanism in human long-term memory formation, we introduce episodic memory activation and reconsolidation (EMAR) to continual relation learning. Every time neural models are activated to learn both new and memorized data, EMAR utilizes relation prototypes for memory reconsolidation exercise to keep a stable understanding of old relations. The experimental results show that EMAR could get rid of catastrophically forgetting old relations and outperform the state-of-the-art continual learning models. The code and datasets are released on https://github.com/thunlp/ ContinualRE. 1 Introduction Relation extraction aims at detecting relations between entities from text, e.g., extracting the relation “the president of” from the given sentence “Newton served as the president of the Royal Society”, which could serve as external resource for various downstream applications (Dong et al., 2015; Xiong et al., 2017; Schlichtkrull et al., ∗indicates equal contribution † Corresponding author 2018). The conventional RE methods (Riedel et al., 2013; Zeng et al., 2014; Lin et al., 2016) mostly focus on recognizing relations for a fixed pre-defined relation set, and cannot handle rapidly emerging novel relations in the real world. Some researchers therefore explore to detect and learn incessantly emerging relations in an open scenario. As shown in Figure 1, their efforts can be formulated into a two-step pipeline: (1) Open Relation Learning extracts phrases and arguments to construct patterns of specific relations, and then discovers unseen relation types by clustering patterns, and finally expands sufficient examples of new relation types from large-scale textual corpora; (2) Continual Relation Learning continually uses those expanded examples of new relations to train an effective classifier. The classifier is trained on a sequence of tasks for handling both existing and novel relations, where each task has its own relation set. Although continual relation learning is vital for learning emerging relations, there are rare explorations for this field. A straightforward solution is to store all historical data and re-train models every time new relations and examples come in. 
Nevertheless, it is computationally expensive, since the set of relations keeps growing. Moreover, the huge number of examples per relation makes frequently mixing new and old examples infeasible in the real world. Therefore, storing all data is not practical in continual relation learning. In view of this, the recent preliminary work (Wang et al., 2019) indicates that the main challenge of continual relation learning is the catastrophic forgetting problem, i.e., it is hard to learn new relations while avoiding forgetting old relations, given that memorizing all the data is almost impossible.

Figure 1: The whole pipeline to detect and learn new relations in an open scenario.

Recent work (Shin et al., 2017; Kemker and Kanan, 2018; Chaudhry et al., 2019) has shown that memory-based approaches, which maintain an episodic memory to save a few training examples from old tasks and re-train the memorized examples while training on new tasks, are one of the most effective solutions to the catastrophic forgetting problem, especially for continual learning in NLP scenarios (Wang et al., 2019; d'Autume et al., 2019). However, existing memory-based models still suffer from an overfitting problem: when adapted to continual relation learning, they may frequently change the feature distribution of old relations, gradually overfit the few examples in memory, and finally become confused among old relations after long-term training.

In fact, these memory-based methods are similar to the long-term memory model of mammalian memory in neuroscience (McClelland et al., 1995; Bontempi et al., 1999). Although researchers in neuroscience are not yet clear about the exact mechanisms inside the human brain, they reach a consensus that the formation of long-term memory relies on continually replaying and consolidating information (Tononi and Cirelli, 2006; Boyce et al., 2016; Yang et al., 2014), corresponding to the episodic memory and memory replay in continual learning models. Yet later work in neuroscience (Nader et al., 2000; Lee et al., 2004; Alberini, 2005) indicates that reactivation of consolidated memory triggers a reconsolidation stage to continually maintain memory, and memory is easy to change or erase in this stage. Applying reconsolidation exercises can help memory go through this stage and keep long-term memory stable. Intuitively, the existing memory-based models perform continual memory activation without reconsolidation exercises, and thus become sensitive and volatile.

Inspired by the reconsolidation mechanism in human long-term memory formation, we introduce episodic memory activation and reconsolidation (EMAR) to continual relation learning in this paper. More specifically, when training models on new relations and their examples, we first adopt memory replay to activate neural models on examples of both new relations and memory, and then utilize a special reconsolidation module to keep models from excessively changing and erasing the feature distribution of old relations.
As the core of relation learning is to grasp relation prototypes rather than rote memorization of relation examples, our reconsolidation module requires models to be able to distinguish old relation prototypes each time memory is replayed and activated. Compared with pioneering explorations that improve episodic memory replay (Chaudhry et al., 2019; Wang et al., 2019) by rigidly keeping the feature distribution of old relations invariant, EMAR is more flexible in feature space and more powerful in remembering relation prototypes. We conduct extensive experiments on several RE datasets, and the results show that EMAR effectively alleviates the catastrophic forgetting problem and significantly outperforms the state-of-the-art continual learning models. Further experiments and analyses indicate the reasons for the effectiveness of EMAR, proving that it can utilize a few examples of old tasks to reconsolidate old relation prototypes and keep a better distinction among old relations after long-term training.

2 Related Work

The conventional RE work, including both supervised RE models (Zelenko et al., 2003; Zhou et al., 2005; Gormley et al., 2015; Socher et al., 2012; Liu et al., 2013; Zeng et al., 2014; Nguyen and Grishman, 2015; dos Santos et al., 2015; Xu et al., 2015; Liu et al., 2015; Miwa and Bansal, 2016) and distantly supervised models (Bunescu and Mooney, 2007; Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Zeng et al., 2015; Lin et al., 2016; Han et al., 2018a; Baldini Soares et al., 2019), focuses on extracting pre-defined relations from text. Yet in the real world, new relations are rapidly emerging, and it is impossible to train models once, with a fixed dataset, to cover all relations. Hence, some researchers turn their attention to relation learning in various open scenarios, in order to detect and learn relations without pre-defined relation sets. As we introduced before, learning incessantly emerging relations consists of two important steps: open relation learning and continual relation learning. There have been many efforts for open relation learning, including pattern extraction (Banko et al., 2007; Fader et al., 2011; Mausam et al., 2012; Del Corro and Gemulla, 2013; Angeli et al., 2015; Petroni et al., 2015; Stanovsky and Dagan, 2016; Mausam, 2016; Cui et al., 2018), relation discovery (Yao et al., 2011; Marcheggiani and Titov, 2016), relation clustering (Shinyama and Sekine, 2006; Elsahar et al., 2017; Wu et al., 2019), and data collection (Riloff et al., 1999; Etzioni et al., 2005; Pantel and Pennacchiotti, 2006; Rozenfeld and Feldman, 2008; Nakashole et al., 2011; Zhu et al., 2009; Gao et al., 2020). However, for continual relation learning, there are still only a few preliminary explorations. Following the continual learning setting in machine learning (Ring, 1994; Thrun and Pratt, 2012), which some work names lifelong or incremental learning, Wang et al. (2019) first explore continual relation learning. Existing continual learning methods focus on three research directions: (1) consolidation-based methods (Kirkpatrick et al., 2017; Zenke et al., 2017; Li and Hoiem, 2017; Liu et al., 2018; Ritter et al., 2018), which consolidate the model parameters important to previous tasks and reduce their learning weights; (2) dynamic architecture methods (Chen et al., 2016; Rusu et al., 2016; Fernando et al., 2017), which dynamically expand model architectures to learn new tasks and effectively prevent forgetting old tasks.
Yet model size growing dramatically with increasing tasks makes these methods unsuitable for NLP applications; (3) memory-based methods (Lopez-Paz and Ranzato, 2017; Rebuffiet al., 2017; Shin et al., 2017; Kemker and Kanan, 2018; Aljundi et al., 2018; Chaudhry et al., 2019) remember a few examples in old tasks and continually learn them with emerging new tasks to alleviate catastrophic forgetting. Among these methods, the memorybased methods have been proven to be the most promising for NLP tasks, including both relation learning (Wang et al., 2019) and other NLP tasks (d’Autume et al., 2019; Sun et al., 2019). Inspired by reconsolidation in human memory formation, we introduce episodic memory activation and reconsolidation (EMAR) to alleviate the overfitting problem of the existing memory-based methods and better learn relations continually. 3 Methodology 3.1 Task Definition and Overall Framework Continual relation learning trains models on a sequence of tasks, where the k-th task has its own training set Tk, validation set Vk, and query set Qk. Each set of the k-th task, e.g. Tk = {(xTk 1 , yTk 1 ), . . . , (xTk N , yTk N )}, consists of a series of examples and their corresponding relation labels, where N is the example number of Tk. Each example xTk i and its label yTk i indicate that xTk i can express the relation yTk i ∈Rk, where Rk is the relation set of the k-th task. More specifically, models will be trained on Tk at the k-th step to learn the new relations in Rk. As relations are emerging and accumulating, continual relation learning requires models to perform well on both the k-th task and previous k−1 tasks. Hence, after training on Tk, models will be evaluated on ˜Qk = Sk i=1 Qi, and required to classify each query example into the all known relation set ˜Rk = Sk i=1 Ri. Therefore, the evaluation will be more and more difficult with the growth of tasks. For handling the catastrophic forgetting in continual relation learning, an episodic memory module M = {M1, M2, . . .} is set to store a few examples of historical tasks, each memory module Mk = {(xMk 1 , yMk 1 ), . . . , (xMk B , yMk B )} stores several examples and labels that come from Tk, where (xMk i , yMk i ) ∈Tk and B is the constrained memory size for each task. As shown in Figure 2, when models are trained 6432 Data for Relation C Data in Memory Data for Activation Prototype Set Instance Set Select Combine Sample E L E P E L E L Learning Computing Prototypes Replay & Activation Reconsolidation Learn Relation A Learn Relation B Learn Relation C Learn Relation D P Prototypes E Encoder L Loss Figure 2: A simple example of continually learning four tasks (each task has only one relation: A, B, C, D respectively) to demonstrate the overall framework of episodic memory activation and reconsolidation during continual relation learning. The purple solid lines and dotted lines represent the forward and backward propagation respectively. The black dotted lines represent the data flow. on the k-th task, our framework includes several steps to learn new relations and meanwhile avoid forgetting old relations: (1) First (Section 3.3), we fine-tune the example encoder on the training set Tk of the k-th task to let the model be aware of new relation patterns. (2) Second (Section 3.4), for each relation in the k-th relation set Rk, we select its informative examples and store the examples into the episodic memory Mk. 
(3) Finally (Section 3.5), we iteratively adopt memory replay and activation as well as memory reconsolidation to learn new relation prototypes while strengthening distinguishing old relation prototypes. Besides, we will introduce how to train models as well as predict relations for query examples in Section 3.6. As the example encoder is used in all other steps, we first introduce it in Section 3.2 before other steps. 3.2 Example Encoder Given an example x, we adopt an example encoder to encode its semantic features for detecting and learning relations. To be specific, we first tokenize the given example into several tokens, and then input the tokenized tokens into neural networks to compute its corresponding embedding. As extracting relations from sentences is related to those entities mentioned in sentences, we thus add special tokens into the tokenized tokens to indicate the beginning and ending positions of those entities. For simplicity, we denote such an example encoding operation as the following equation, x = f(x), (1) where x ∈Rd is the semantic embedding of x, and d is the embedding dimension. Note that the encoder is not our focus in this paper, we select bidirectional long short-term memory (BiLSTM) (Bengio et al., 1994) as representative encoders to encode examples. In fact, other neural text encoders like convolutional neural networks (Zeng et al., 2014) and pre-trained language models (Devlin et al., 2019) can also be adopted as example encoders. 3.3 Learning for New Tasks When the k-th task is arising, the example encoder has not touched any examples of new relations before, and cannot extract the semantic features of them. Hence, we first fine-tune the example encoder on Tk = {(xTk 1 , yTk 1 ), . . . , (xTk N , yTk N )} to grasp new relation patterns in Rk. The loss function of learning the k-th task is as follows, L(θ) = − N X i=1 | ˜ Rk| X j=1 δy Tk i =rj× log exp(g(f(xTk i ), rj)) P| ˜ Rk| l=1 exp(g(f(xTk i ), rl)) , (2) where rj is the embedding of the j-th relation rj ∈˜Rk in the all known relation set ˜Rk, g(·, ·) is the function to compute similarities between embeddings (e.g. cosine similarity), and θ is the parameters that can be optimized, including the example encoder parameters and relation embeddings. If yTk i equals rj, δy Tk i =rj = 1, otherwise 6433 δy Tk i =rj = 0. For each new relation, we first randomly initialize its embedding and then optimize Eq. (2). 3.4 Selecting Examples for Memory After several epochs of learning for new tasks with Eq. (2), we store a few examples from Tk into the memory Mk. More specifically, we select informative and diverse examples from Tk to cover new relation patterns as much as possible, which can make the memory effectively approximate the feature distribution of relations. After encoding all examples of the k-th task Tk into {xTk 1 , . . . , xTk N }, we apply K-Means to cluster these example embeddings, where the number of clusters is the memory size B. Then, for each cluster, we select the example closest to the cluster centroid and record which relation these selected examples belong to. We denote this selected example set Ck. By counting the example number in Ck for each relation, we can describe the relation importance in this task: more selected examples of a relation indicates more importance. As the limited memory size, for those more important relations, we select at least ⌊B |Rk|⌋examples, yet for those less important ones, we select at most ⌈B |Rk|⌉ examples. 
If a relation does not have enough examples to fill its allocated memory slots, the unused slots are re-allocated to other relations. For each relation, we again use K-Means to cluster its own examples, with the number of clusters equal to the number of examples allocated to it in memory. For each cluster, we select the example closest to the cluster centroid and store it in the memory $M_k$.

3.5 Replay, Activation and Reconsolidation

After fine-tuning the example encoder on $T_k$ and selecting informative examples for $M_k$, we iteratively perform prototype computation, memory replay and activation, and memory reconsolidation to strengthen the identification of new relation patterns while keeping old relation patterns distinguishable.

Computing Prototypes. Combining all examples in the episodic memory gives the whole memory set $\tilde{M}_k = \bigcup_{i=1}^{k} M_i$. Since we aim to grasp relation prototypes rather than rote-memorize relation examples, for each known relation $r_i \in \tilde{R}_k$ we sample a prototype set $P_i = \{x_1^{P_i}, \ldots, x_{|P_i|}^{P_i}\}$, where each example comes from $\tilde{M}_k$ and is labeled with $r_i$, and compute its prototype embedding

$\mathbf{p}_i = \frac{1}{|P_i|} \sum_{j=1}^{|P_i|} f(x_j^{P_i})$,  (3)

where $\mathbf{p}_i$ is the relation prototype embedding of $r_i \in \tilde{R}_k$.

Memory Replay and Activation. In memory replay and activation, the whole memory set $\tilde{M}_k$ and the k-th training set $T_k$ are combined into an activation set $A_k = \tilde{M}_k \cup T_k = \{(x_1^{A_k}, y_1^{A_k}), \ldots, (x_M^{A_k}, y_M^{A_k})\}$, which continually activates the model to learn new relations and remember old ones, where $M$ is the total number of examples in $\tilde{M}_k$ and $T_k$. The loss function is

$\mathcal{L}_A(\theta) = -\sum_{i=1}^{M} \sum_{j=1}^{|\tilde{R}_k|} \delta_{y_i^{A_k}=r_j} \log \frac{\exp(g(f(x_i^{A_k}), \mathbf{r}_j))}{\sum_{l=1}^{|\tilde{R}_k|} \exp(g(f(x_i^{A_k}), \mathbf{r}_l))}$.  (4)

Memory Reconsolidation. As mentioned before, conducting memory replay and activation alone leads to overfitting: after long-term training, the model ends up remembering only the handful of memorized examples. Meanwhile, the core of learning relations is to grasp relation prototypes rather than to rote-memorize relation examples. Hence, every time we conduct memory replay and activation to grasp both new and old relations, we apply a memory reconsolidation module to strengthen this process, which resembles the reconsolidation exercises that keep long-term memory stable in the human brain. For each known relation $r_i \in \tilde{R}_k$, we sample an instance set $I_i = \{x_1^{I_i}, \ldots, x_{|I_i|}^{I_i}\}$ in the same way as $P_i$, where each example also comes from $\tilde{M}_k$ and is labeled with $r_i$. The loss function of memory reconsolidation is

$\mathcal{L}_R(\theta) = -\sum_{i=1}^{|\tilde{R}_k|} \sum_{j=1}^{|I_i|} \log \frac{\exp(g(f(x_j^{I_i}), \mathbf{p}_i))}{\sum_{l=1}^{|\tilde{R}_k|} \exp(g(f(x_j^{I_i}), \mathbf{p}_l))}$,  (5)

where $\mathbf{p}_l$ is the relation prototype embedding of $r_l \in \tilde{R}_k$ computed by Eq. (3).

Algorithm 1: Train EMAR for the k-th task
Require: the training set $T_k$ of the k-th task; the emerging relation set $R_k$ of the k-th task; the memory module $\tilde{M}_{k-1}$ before learning $T_k$; the known relation set $\tilde{R}_{k-1}$ before learning $T_k$
1: Initialize the relation embeddings for $R_k$
2: $\tilde{R}_k \leftarrow \tilde{R}_{k-1} \cup R_k$
3: for $i \leftarrow 1$ to epoch1 do
4:   Update $\theta$ with $\nabla\mathcal{L}$ on $T_k$
5: end for
6: Select informative examples from $T_k$ and store them in $M_k$
7: $\tilde{M}_k \leftarrow \tilde{M}_{k-1} \cup M_k$
8: $A_k \leftarrow \tilde{M}_k \cup T_k$
9: for $i \leftarrow 1$ to epoch2 do
10:   for each relation $r_j \in \tilde{R}_k$ do
11:     Sample $P_j$ from $\tilde{M}_k$ and compute its relation prototype embedding $\mathbf{p}_j$
12:   end for
13:   for $j \leftarrow 1$ to iter1 do
14:     Update $\theta$ with $\nabla\mathcal{L}_A$ on $A_k$
15:   end for
16:   for $j \leftarrow 1$ to iter2 do
17:     Sample $I_i$ from $\tilde{M}_k$ for each known relation $r_i$
18:     Update $\theta$ with $\nabla\mathcal{L}_R$ on $\{I_1, \ldots, I_{|\tilde{R}_k|}\}$
19:   end for
20: end for
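As a concrete illustration of the example-selection step of Section 3.4 (line 6 of Algorithm 1), the following is a minimal sketch rather than the authors' implementation: it assumes the current task's examples have already been encoded into an (N, d) array of embeddings f(x), and it uses scikit-learn's K-Means to keep, for each cluster, the example closest to the centroid. The per-relation quota re-allocation described above is omitted for brevity.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_memory(embeddings: np.ndarray, budget: int) -> list:
    """Cluster the encoded examples of the current task with K-Means and keep,
    for each cluster, the index of the example closest to the centroid.
    `embeddings` is an (N, d) array of f(x) vectors; `budget` is the memory
    size B per task. Returns the indices of the kept examples."""
    n_clusters = min(budget, len(embeddings))
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(embeddings)
    kept = []
    for centroid in km.cluster_centers_:
        distances = np.linalg.norm(embeddings - centroid, axis=1)
        kept.append(int(np.argmin(distances)))
    # A single point can be closest to two centroids, hence the deduplication.
    return sorted(set(kept))
```

Selecting centroid-nearest examples rather than storing centroids themselves keeps the memory made of real sentences, which the replay, activation, and reconsolidation losses above require.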
3.6 Training and Prediction

To train on the k-th task, we first optimize the parameters with $\mathcal{L}(\theta)$ for several epochs. Then we select examples for the memory and iteratively optimize the parameters with $\mathcal{L}_A(\theta)$ and $\mathcal{L}_R(\theta)$ until convergence. Further details of the training process are given in Algorithm 1.

After finishing the k-th task, for each known relation $r_i \in \tilde{R}_k$ we collect all of its memorized examples $E_i = \{x_1^{E_i}, \ldots, x_S^{E_i}\}$ in the whole memory $\tilde{M}_k$, where $S$ is the number of examples of $r_i$ in the memory, and compute the final relation prototype used for prediction:

$\tilde{\mathbf{p}}_i = \frac{\mathbf{r}_i + \sum_{j=1}^{S} f(x_j^{E_i})}{1 + S}$,  (6)

where $\mathbf{r}_i$ is the relation embedding of $r_i$ used in Eq. (2) and Eq. (4). For each query example $x$ in $\tilde{Q}_k$, we define its score for the relation $r_i$ as

$s(x, r_i) = g(f(x), \tilde{\mathbf{p}}_i)$,  (7)

where $\tilde{\mathbf{p}}_i$ is the final prototype of the relation $r_i$ computed by Eq. (6). Finally, the prediction $y$ for the query $x$ is

$y = \arg\max_{r_i \in \tilde{R}_k} s(x, r_i)$.  (8)

                 FewRel          SimpleQ         TACRED
                 W      A        W      A        W      A
Lower Bound     18.9   20.8     63.2   56.9     12.3    9.5
EWC             27.1   30.2     67.2   59.0     14.5   14.5
GEM             49.2   59.8     84.1   79.6      -      -
AGEM            36.1   42.5     77.6   72.2     15.7   16.0
EMR             51.0   62.0     85.2   80.8     28.7   35.6
EA-EMR          56.6   67.3     87.8   82.4     30.5   40.5
EMAR            66.0   77.9     85.2   83.7     44.5   54.4
Upper Bound     81.9   85.8     88.9   84.1     74.3   77.0

Table 1: Accuracy (%) of models on three benchmarks. "W" stands for the whole performance and "A" for the average performance. The results on FewRel and SimpleQ come from Wang et al. (2019); the results on TACRED come from our implemented models. Entries marked "-" are not reported.

4 Experiments

4.1 Datasets

We carry out our experiments on three benchmark datasets. (1) FewRel (Han et al., 2018b) is a RE dataset that contains 80 relations and 56,000 examples in total. We follow the settings of Wang et al. (2019) to turn FewRel into a continual learning benchmark: FewRel is split into 10 clusters of relations, leading to 10 tasks, with each relation belonging to exactly one task. Each example in these tasks is paired with its relation and a candidate set of 10 randomly selected relations for evaluation. (2) SimpleQuestions (SimpleQ) (Bordes et al., 2015) is a knowledge base question answering dataset that contains 108,442 questions; Yu et al. (2017) construct a relation detection dataset based on it, in which questions are linked to relations. As with FewRel, we follow the settings of Wang et al. (2019): SimpleQ is split into 20 clusters of relations to construct 20 tasks. Because each question in SimpleQ already comes with a candidate set for evaluation, we do not sample candidate sets again for SimpleQ. (3) TACRED (Zhang et al., 2017) is a RE dataset that contains 42 relations and 21,784 examples. Similar to FewRel, we split TACRED into 10 clusters of relations to construct 10 tasks and randomly sample a candidate set of 10 relations for each example. Since TACRED contains a special relation "n/a" (not available), we filter out the examples labeled "n/a" and use the remaining examples for continual TACRED.

[Figure 3: Changes in accuracy (%) with increasing tasks through the continual learning process; panels: (a) FewRel, (b) SimpleQuestions, (c) TACRED.]
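To make the benchmark construction above concrete, here is a minimal sketch, under stated assumptions rather than the authors' code, of how a flat relation extraction dataset can be partitioned into task clusters. The random round-robin partition and the `split_into_tasks` name are illustrative; the benchmarks above reuse the fixed splits released by Wang et al. (2019).

```python
import random
from collections import defaultdict

def split_into_tasks(examples, num_tasks=10, seed=0):
    """Partition relations into `num_tasks` disjoint clusters and assign each
    (sentence, relation) example to the task that owns its relation, so that
    every relation belongs to exactly one task."""
    relations = sorted({y for _, y in examples})
    random.Random(seed).shuffle(relations)
    task_of = {r: i % num_tasks for i, r in enumerate(relations)}
    tasks = defaultdict(list)
    for x, y in examples:
        tasks[task_of[y]].append((x, y))
    return [tasks[i] for i in range(num_tasks)]
```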
Table 2: Accuracy (%) of models with different memory sizes (10, 25, and 50 memorized examples per task). All results come from our implemented models. "W" stands for the whole performance and "A" for the average performance. EWC does not use an episodic memory, so a single result is reported per dataset.

FewRel          size 10         size 25         size 50
                W      A        W      A        W      A
EWC            21.3   24.4      (memory size not applicable)
AGEM           29.0   34.0     33.8   39.0     41.2   47.5
EMR            42.0   54.1     49.0   60.5     53.6   65.1
EA-EMR         49.0   61.2     54.9   66.4     59.1   69.9
EMAR           53.8   69.1     62.5   74.9     66.0   77.9

SimpleQ         size 10         size 25         size 50
                W      A        W      A        W      A
EWC            63.9   62.5      (memory size not applicable)
AGEM           69.1   66.1     72.2   69.2     76.2   73.1
EMR            81.5   77.4     84.9   81.0     86.9   82.9
EA-EMR         83.3   78.7     86.4   82.0     87.9   83.5
EMAR           80.9   78.7     84.6   81.4     85.2   83.7

TACRED          size 10         size 25         size 50
                W      A        W      A        W      A
EWC            14.5   14.5      (memory size not applicable)
AGEM           14.7   14.5     15.0   15.5     15.7   16.0
EMR            21.8   26.5     25.7   31.6     28.7   35.6
EA-EMR         23.0   30.0     27.7   37.0     30.5   40.5
EMAR           31.0   36.3     37.8   48.5     44.5   54.4

4.2 Experimental Settings

We use two evaluation settings: whole performance, which calculates accuracy on the whole test set of all tasks, and average performance, which averages the accuracy over all seen tasks. After all tasks have been seen, we use the final whole performance and average performance to evaluate the overall performance of continual relation learning. Because average performance highlights how well a model handles catastrophic forgetting, it is the main metric for evaluating models.

As the task sequence influences the final model performance, we implement the baseline models ourselves based on the toolkit released by Wang et al. (2019) (https://github.com/hongwang600/Lifelong_Relation_Detection). For a fair comparison, we set the random seeds to be completely consistent with Wang et al. (2019), so that the task sequence is identical to theirs. For other settings, such as the hidden embedding dimension and pre-trained input embeddings, we also follow Wang et al. (2019).

4.3 Baselines

We evaluate our model and several baselines on the benchmarks, and use two theoretical models to measure the lower and upper bounds: (1) Lower Bound, which continually fine-tunes models on each new task without memorizing any historical examples; (2) Upper Bound, which remembers all historical examples and continually re-trains models on all the data; this serves as the ideal upper bound for the performance of continual relation learning; (3) EWC (Kirkpatrick et al., 2017), which adopts elastic weight consolidation to add a special L2 regularization on parameter changes; EWC uses Fisher information to measure how important each parameter is to old tasks and slows down the updates of those important parameters; (4) EMR (Parisi et al., 2019), a basic memory-based method that memorizes a few historical examples and simply conducts memory replay: every time a new task comes in, EMR mixes memorized examples with new examples to fine-tune the model; (5) GEM (Lopez-Paz and Ranzato, 2017), an extension of EMR, which constrains the directions of new gradients so that the optimization directions do not conflict with the gradients on old tasks;

[Figure 4: A visualization of features learnt by EA-EMR and EMAR at different training steps on FewRel; panels (a)-(d) show EA-EMR at steps 1, 4, 7, and 10, and panels (e)-(h) show EMAR at the same steps. For each image, we use a support vector machine to acquire its best linear boundary and draw it as the blue line.]

            Step-1   Step-4   Step-7   Step-10
EA-EMR       98.8     65.0     78.8     73.8
EMAR         92.5     75.0     87.5     80.0

Table 3: Classification accuracy (%) based on the features learnt by EA-EMR and EMAR in Figure 4.

(6) AGEM
(Chaudhry et al., 2019), an extension of GEM that takes the gradient on examples sampled from the memory as the only constraint on the optimization directions of the current task; (7) EA-EMR (Wang et al., 2019), which introduces memory replay and an embedding alignment mechanism to enhance previous tasks and mitigate embedding distortion when training on new tasks. EA-EMR is also an extension of EMR and is the previous state of the art in continual relation learning.

4.4 Overall Results

Table 1 shows the overall performance on the three benchmarks under the two evaluation settings. From the table, we can see that: (1) our proposed EMAR significantly outperforms the other baselines and achieves state-of-the-art results in almost all settings. On SimpleQ, the performance of EMAR is close to that of EA-EMR and EMR; the reason is perhaps that the SimpleQ benchmark is overly simple (even the weakest Lower Bound achieves results close to the Upper Bound). On the other benchmarks, EMAR outperforms all baseline models by a large margin, showing the superiority of the proposed episodic memory activation and reconsolidation mechanism. (2) There is still a large gap between our model and the Upper Bound, which indicates that much remains to be explored in continual relation learning.

To further investigate how accuracy changes while learning new tasks, we show the average performance of the models at each step in Figure 3. We observe that: (1) with an increasing number of tasks, the performance of all models decreases to some degree. This indicates that catastrophically forgetting old relations is inevitable and is indeed one of the major difficulties of continual relation learning. (2) The memory-based methods significantly outperform the consolidation-based method, which demonstrates that memory-based methods can alleviate catastrophic forgetting to some extent. (3) Our proposed EMAR achieves much better results than the state-of-the-art model EA-EMR. This shows the effectiveness of our memory reconsolidation and further indicates that grasping relation prototypes is more important than rote memorization of examples.

4.5 Effect of Memory Size

The memory size is the number of remembered examples per task. In this section, we investigate the effect of the memory size on the performance of the baselines and our proposed model. We compare three memory sizes: 10, 25 and 50. As existing work does not report results with different memory sizes, we re-implement the baseline models ourselves for this experiment. The results are shown in Table 2. We find that: (1) with increasing memory size, the performance of all models improves, which shows that the memory size is one of the key factors determining the performance of continual relation learning models. (2) On both FewRel and TACRED, EMAR performs best under all memory sizes, and even achieves results comparable to those of other models equipped with larger memories. This indicates that the relation prototypes adopted in EMAR utilize the memory more effectively than existing memory-based methods.

4.6 Effect of Prototypes and Reconsolidation

To show the effectiveness of prototypes and reconsolidation, we present a case study of how the feature spaces learnt by EA-EMR and EMAR (ours) change. We sample two relations from the training set and 40 examples per relation from the test set.
Then we train EA-EMR and EMAR on the sampled training data and visualize how the 40 sampled instances move in the feature space at different training steps. From Figure 4, we can see that EMAR learns better instance features after multi-step training: its embedding space is sparser and the features of the two relations are more distinguishable. In contrast, the features learnt by EA-EMR become denser as training proceeds and are thus harder to classify. This phenomenon is mainly due to the different ways in which EA-EMR and EMAR constrain the features. The L2 regularization used in EA-EMR to preserve the instance distribution of old relations leads to higher density in the feature space and smaller distances between different relations after several training steps. In contrast, EMAR prevents the model from forgetting previous relations through relation prototypes. Compared with EA-EMR, using prototypes for reconsolidation is a more flexible constraint, allowing EMAR to use a larger feature space for representing examples and prototypes. To analyze the case quantitatively, we fit a support vector machine to obtain a linear boundary for each image in Figure 4 and list the classification results in Table 3. The quantitative results show that the embeddings learnt by EMAR achieve better classification performance, which further supports the observations above.

5 Conclusion and Future Work

To alleviate catastrophic forgetting of old relations in continual relation learning, we introduce episodic memory activation and reconsolidation (EMAR), inspired by the mechanism of long-term memory formation in humans. Compared with existing memory-based methods, EMAR requires models to understand the prototypes of old relations rather than to overfit a few specific memorized examples, which keeps relations better separated after long-term training. We conduct experiments on three relation extraction benchmarks and provide extensive results and empirical analyses, showing the effectiveness of EMAR in utilizing memorized examples. For future work, how to combine open relation learning and continual relation learning to complete the pipeline for emerging relations remains an open problem, and we will continue to work on it.

Acknowledgments

This work is supported by the National Key Research and Development Program of China (No. 2018YFB1004503) and the National Natural Science Foundation of China (NSFC No. 61732008, 61772302). Tianyu Gao is supported by the 2019 Tencent Rhino-Bird Elite Training Program and the Tsinghua University Initiative Scientific Research Program.

References

Cristina M Alberini. 2005. Mechanisms of memory stabilization: are consolidation and reconsolidation similar or distinct processes? Trends in Neurosciences, 28(1):51–56.

Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. 2018. Memory aware synapses: Learning what (not) to forget. In Proceedings of ECCV, pages 139–154.

Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proceedings of ACL-IJCNLP, pages 344–354.

Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Proceedings of ACL, pages 2895–2905.

Michele Banko, Michael J Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007.
Open information extraction from the web. In Proceedings of IJCAI, pages 2670–2676. Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks, 5(2):157–166. Bruno Bontempi, Catherine Laurent-Demir, Claude Destrade, and Robert Jaffard. 1999. Timedependent reorganization of brain circuitry underlying long-term memory storage. Nature, 400(6745):671. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075. Richard Boyce, Stephen D Glasgow, Sylvain Williams, and Antoine Adamantidis. 2016. Causal evidence for the role of rem sleep theta rhythm in contextual memory consolidation. Science, 352(6287):812– 816. Razvan Bunescu and Raymond Mooney. 2007. Learning to extract relations from the web using minimal supervision. In Proceedings of ACL, pages 576– 583. Arslan Chaudhry, Marc’Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. 2019. Efficient lifelong learning with a-gem. In Proceedings of ICLR. Tianqi Chen, Ian Goodfellow, and Jonathon Shlens. 2016. Net2Net: Accelerating learning via knowledge transfer. In Proceedings of ICLR. Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In Proceedings of ACL, pages 407–413. Cyprien de Masson d’Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama. 2019. Episodic memory in lifelong language learning. In Proceedings of NIPS. Luciano Del Corro and Rainer Gemulla. 2013. Clausie: Clause-based open information extraction. In Proceedings of WWW, pages 355–366. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186. Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2015. Question answering over Freebase with multicolumn convolutional neural networks. In Proceedings of ACL-IJCNLP, pages 260–269. Hady Elsahar, Elena Demidova, Simon Gottschalk, Christophe Gravier, and Frederique Laforest. 2017. Unsupervised open relation extraction. In Proceedings of ESWC, pages 12–16. Oren Etzioni, Michael Cafarella, Doug Downey, AnaMaria Popescu, Tal Shaked, Stephen Soderland, Daniel S Weld, and Alexander Yates. 2005. Unsupervised named-entity extraction from the web: An experimental study. AI, 165:91–134. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of EMNLP, pages 1535– 1545. Chrisantha Fernando, Dylan Banarse, Charles Blundell, Yori Zwols, David Ha, Andrei A Rusu, Alexander Pritzel, and Daan Wierstra. 2017. PathNet: Evolution channels gradient descent in super neural networks. ai.google. Tianyu Gao, Xu Han, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, and Maosong Sun. 2020. Neural snowball for few-shot relation learning. In Proceedings of AAAI. Matthew R Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich compositional embedding models. In Proceedings of EMNLP, pages 1774–1784. Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, and Peng Li. 2018a. Hierarchical relation extraction with coarse-to-fine grained attention. In Proceedings of EMNLP, pages 2236–2245. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018b. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of EMNLP, pages 4803–4809. 
Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of ACL, pages 541–550. Ronald Kemker and Christopher Kanan. 2018. Fearnet: Brain-inspired model for incremental learning. In Proceedings of ICLR. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of NAS, pages 3521–3526. Jonathan LC Lee, Barry J Everitt, and Kerrie L Thomas. 2004. Independent cellular processes for hippocampal memory consolidation and reconsolidation. Science, 304(5672):839–843. Zhizhong Li and Derek Hoiem. 2017. Learning without forgetting. TPAMI, 40(12):2935–2947. 6439 Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of ACL, pages 2124–2133. ChunYang Liu, WenBo Sun, WenHan Chao, and Wanxiang Che. 2013. Convolution neural network for relation extraction. In Proceedings of ICDM, pages 231–242. Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M Lopez, and Andrew D Bagdanov. 2018. Rotate your networks: Better weight consolidation and less catastrophic forgetting. In Proceedings of ICPR, pages 2262–2268. Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, and WANG Houfeng. 2015. A dependency-based neural network for relation classification. In Proceedings of ACL-IJCNLP, pages 285–290. David Lopez-Paz and Marc’Aurelio Ranzato. 2017. Gradient episodic memory for continual learning. In Proceedings of NIPS, pages 6467–6476. Diego Marcheggiani and Ivan Titov. 2016. Discretestate variational autoencoders for joint discovery and factorization of relations. TACL, 4:231–244. Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In Proceedings of EMNLP-CoNLL, pages 523–534. Mausam Mausam. 2016. Open information extraction systems and downstream applications. In Proceedings of IJCAI, pages 4074–4077. James L McClelland, Bruce L McNaughton, and Randall C O’Reilly. 1995. Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological review, 102(3):419. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of ACLIJCNLP, pages 1003–1011. Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of ACL, pages 1105– 1116. Karim Nader, Glenn E Schafe, and Joseph E Le Doux. 2000. Fear memories require protein synthesis in the amygdala for reconsolidation after retrieval. Nature, 406(6797):722. Ndapandula Nakashole, Martin Theobald, and Gerhard Weikum. 2011. Scalable knowledge harvesting with high precision and high recall. In Proceedings of WSDM, pages 227–236. Thien Huu Nguyen and Ralph Grishman. 2015. Relation extraction: Perspective from convolutional neural networks. In Proceedings of the NAACL Workshop on Vector Space Modeling for NLP, pages 39– 48. Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automatically harvesting semantic relations. In Proceedings of COLING, pages 113–120. 
German I Parisi, Ronald Kemker, Jose L Part, Christopher Kanan, and Stefan Wermter. 2019. Continual lifelong learning with neural networks: A review. Neural Networks. Fabio Petroni, Luciano Del Corro, and Rainer Gemulla. 2015. CORE: Context-aware open relation extraction with factorization machines. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1763–1773. Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. 2017. iCaRL: Incremental classifier and representation learning. In Proceedings of CVPR, pages 2001–2010. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of ECML-PKDD, pages 148–163. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of NAACL, pages 74–84. Ellen Riloff, Rosie Jones, et al. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of AAAI, pages 474– 479. Mark Bishop Ring. 1994. Continual learning in reinforcement environments. Ph.D. thesis, University of Texas at Austin Austin, Texas 78712. Hippolyt Ritter, Aleksandar Botev, and David Barber. 2018. Online structured laplace approximations for overcoming catastrophic forgetting. In Proceedings of NIPS, pages 3738–3748. Benjamin Rozenfeld and Ronen Feldman. 2008. Selfsupervised relation extraction from the web. Proceedings of KAIS, 17(1):17–33. Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. 2016. Progressive neural networks. ai.google. Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. Classifying relations by ranking with convolutional neural networks. In Proceedings of ACLIJCNLP, pages 626–634. 6440 Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In Proceedings of ESWC, pages 593–607. Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. 2017. Continual learning with deep generative replay. In Proceedings of NIPS, pages 2990–2999. Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proceedings of NAACL-HLT, pages 304–311. Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of EMNLP, pages 1201–1211. Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In Proceedings of EMNLP, pages 2300–2305. Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. 2019. LAMAL: Language modeling is all you need for lifelong language learning. arXiv preprint arXiv:1909.03329. Sebastian Thrun and Lorien Pratt. 2012. Learning to learn. Springer Science & Business Media. Giulio Tononi and Chiara Cirelli. 2006. Sleep function and synaptic homeostasis. Sleep medicine reviews, 10(1):49–62. Hong Wang, Wenhan Xiong, Mo Yu, Xiaoxiao Guo, Shiyu Chang, and William Yang Wang. 2019. Sentence embedding alignment for lifelong relation extraction. In Proceedings of NAACL-HLT, pages 796–806. Ruidong Wu, Yuan Yao, Xu Han, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, and Maosong Sun. 2019. Open relation extraction: Relational knowledge transfer from supervised data to unsupervised data. In Proceedings of EMNLP-IJCNLP, pages 219–228. 
Chenyan Xiong, Russell Power, and Jamie Callan. 2017. Explicit semantic ranking for academic search via knowledge graph embedding. In Proceedings of WWW, pages 1271–1279. Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In Proceedings of EMNLP, pages 1785–1794. Guang Yang, Cora Sau Wan Lai, Joseph Cichon, Lei Ma, Wei Li, and Wen-Biao Gan. 2014. Sleep promotes branch-specific formation of dendritic spines after learning. Science, 344(6188):1173–1178. Limin Yao, Aria Haghighi, Sebastian Riedel, and Andrew McCallum. 2011. Structured relation discovery using generative models. In Proceedings of EMNLP, pages 1456–1466. Mo Yu, Wenpeng Yin, Kazi Saidul Hasan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2017. Improved neural relation detection for knowledge base question answering. In Proceedings of ACL, pages 571–581. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Proceedings of JMLR, pages 1083–1106. Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In Proceedings of EMNLP, pages 1753–1762. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING, pages 2335–2344. Friedemann Zenke, Ben Poole, and Surya Ganguli. 2017. Continual learning through synaptic intelligence. In Proceedings of ICML, pages 3987–3995. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. Positionaware attention and supervised data improve slot filling. In Proceedings of EMNLP, pages 35–45. GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of ACL, pages 427–434. Jun Zhu, Zaiqing Nie, Xiaojiang Liu, Bo Zhang, and Ji-Rong Wen. 2009. StatSnowball: A statistical approach to extracting entity relationships. In Proceedings of WWW, pages 101–110.
Handling Rare Entities for Neural Sequence Labeling
Yangming Li, Han Li, Kaisheng Yao and Xiaolong Li
Ant Financial Services Group, Alibaba Group; Harbin Institute of Technology
{pangmao.lym,kaisheng.yao,xl.li}@antfin.com

Abstract

One great challenge in neural sequence labeling is the data sparsity problem for rare entity words and phrases. Most test set entities appear only a few times and many are unseen in the training corpus, yielding a large number of out-of-vocabulary (OOV) and low-frequency (LF) entities during evaluation. In this work, we propose approaches to address this problem. For OOV entities, we introduce local context reconstruction to implicitly incorporate contextual information into their representations. For LF entities, we present delexicalized entity identification to explicitly extract their frequency-agnostic and entity-type-specific representations. Extensive experiments on multiple benchmark datasets show that our model significantly outperforms all previous methods and achieves new state-of-the-art results. Notably, our methods surpass a model fine-tuned on pre-trained language models without using any external resources.

1 Introduction

In the context of natural language processing (NLP), the goal of sequence labeling is to assign a categorical label to each entity word or phrase in a text sequence. It is a fundamental area that underlies a range of applications including slot filling and named entity recognition. Traditional methods use statistical models. Recent approaches have been based on neural networks (Collobert et al., 2011; Mesnil et al., 2014; Ma and Hovy, 2016; Strubell et al., 2017; Li et al., 2018; Devlin et al., 2018; Liu et al., 2019a; Luo et al., 2020; Xin et al., 2018) and have made great progress on various sequence labeling tasks.

Frequency        Number   Percentage
= 0 (OOV)         1611      65.1%
= 1 (Low)          191       7.7%
< 10 (Low)         635      25.7%
> 20 (High)        117       4.7%
≥ 0 (Total)       2475     100.0%

Table 1: Number of occurrences of test set entities in the training set. OOV entities are those that have no occurrence (frequency = 0) in the training set. Low-frequency entities are those with fewer than ten occurrences (frequency < 10). Percentages of entity occurrences are also shown. Data source: CoNLL-03.

However, a great challenge to neural-network-based approaches comes from the data sparsity problem (Augenstein et al., 2017). Specifically, in the context of sequence labeling, the majority of entities in the test dataset occur only a few times in the training corpus, or not at all. In this paper, we refer to this phenomenon as the rare entity problem. It is different from other types of data sparsity problems, such as the lack of training data for low-resource languages (Lin et al., 2018), as the rare entity problem is related to a mismatch of entity distributions between training and test rather than to the size of the training data. We present an example of the problem in Table 1. It shows that less than 5% of test set entities are frequently observed in the training set, and about 65% of test set entities are absent from the training set. Rare entities can be categorized into two types: out-of-vocabulary (OOV), for test set entities that are not observed in the training set, and low-frequency (LF), for entities with few (e.g., fewer than 10) occurrences in the training set.
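The frequency analysis behind Table 1 can be sketched as follows. This is an illustrative reconstruction rather than the authors' script: the bucket boundaries are simplified to the OOV/LF split used in the rest of the paper, and `train_entities` / `test_entities` are assumed to be flat lists of entity surface forms.

```python
from collections import Counter

def categorize_entities(train_entities, test_entities, lf_threshold=10):
    """Count how often each test-set entity appears in the training set, then
    bucket it as OOV (zero occurrences), LF (fewer than `lf_threshold`
    occurrences), or frequent."""
    train_counts = Counter(train_entities)
    buckets = {"oov": 0, "low_frequency": 0, "frequent": 0}
    for entity in test_entities:
        freq = train_counts[entity]
        if freq == 0:
            buckets["oov"] += 1
        elif freq < lf_threshold:
            buckets["low_frequency"] += 1
        else:
            buckets["frequent"] += 1
    return buckets
```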
Without proper processing, rare entities can incur the following risks when building a neural network. Firstly, OOV terms may act as noise during inference, since they lack lexical information from the training set (Bazzi, 2002). Secondly, it is hard to obtain high-quality representations for LF entities (Gong et al., 2018). Lastly, the high rate of OOV and LF entities exposes a distribution discrepancy between training and test, which usually leads to poor test performance.

In general, there are two existing strategies attempting to mitigate the above issues: external resources and transfer learning. The external resource approach, for example (Huang et al., 2015; Li et al., 2018), uses external knowledge such as part-of-speech tags for NER or additional information from intent detection for slot filling. However, external knowledge such as part-of-speech tags is not always available in practical applications, and open-source taggers such as (Manning et al., 2014) may perform poorly for cross-domain annotation. Character and n-gram features are mainly designed to deal with morphologically similar OOV words. The transfer learning approach, such as using ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018), fine-tunes pre-trained models on the downstream task (Liu et al., 2019a). Nevertheless, it does not directly address problems such as the entity distribution discrepancy between training and test. Moreover, our proposed methods surpass these methods without resorting to external resources or large pre-trained language models.

This paper proposes novel techniques that enable sequence labeling models to achieve state-of-the-art performance without using external resources or transfer learning. These are

• local context reconstruction (LCR), which is applied to OOV entities, and
• delexicalized entity identification (DEI), which is applied to LF entities.

Local context reconstruction relates OOV entities to their contexts. One key point is applying a variational autoencoder to model this reconstruction process, which is typically a one-to-many generation process. Delexicalized entity identification aims at extracting frequency-agnostic and entity-type-specific representations, thereby reducing the reliance on high-frequency occurrences of entities (we refer to slots in slot filling tasks as entities for brevity, although their definitions are not equivalent). It uses a novel adversarial training technique to achieve this goal. Both methods use an effective random entity masking strategy.

We evaluate the methods on sequence labeling tasks on several benchmark datasets. Extensive experiments show that the proposed methods significantly outperform previous models by a large margin. Detailed analysis indicates that the proposed methods indeed alleviate the rare entity problem. Notably, without using any external knowledge or pre-trained models, the proposed methods surpass a model that uses fine-tuned BERT.

2 Background

Given an input sequence X = [x_1, x_2, ..., x_N] with N tokens, the sequence labeling task aims at learning a functional mapping to obtain a target label sequence Y = [y_1, y_2, ..., y_N] of equal length. In the following, we briefly introduce a typical method for sequence labeling and review related techniques that we use in deriving our model.

2.1 Bidirectional RNN + CRF

Recurrent neural networks (RNNs) (Hochreiter and Schmidhuber, 1997) have been widely used for sequence labeling.
The majority of high-performance models use a bidirectional RNN (Schuster and Paliwal, 1997) to encode the input sequence X and a conditional random field (CRF) (Lafferty et al., 2001) as a decoder to output Y. The bidirectional RNN first embeds the observation $x_i$ at each position i into a continuous space $\mathbf{x}_i$. It then applies forward and backward operations over the whole sequence time-recursively as

$\overrightarrow{h}_i = \overrightarrow{f}(\mathbf{x}_i, \overrightarrow{h}_{i-1})$,  $\overleftarrow{h}_i = \overleftarrow{f}(\mathbf{x}_i, \overleftarrow{h}_{i+1})$.  (1)

The CRF computes the probability of a label sequence Y given X as

$\log p(Y|X) \propto \sum_i (g_i[y_i] + G[y_i, y_{i+1}])$,  $g_i = W (\overrightarrow{h}_i \oplus \overleftarrow{h}_i)$,  (2)

where $\oplus$ denotes the concatenation operation, and G and W are learnable matrices. The sequence with the maximum score is the output of the model, typically obtained using the Viterbi algorithm. We use a bidirectional RNN + CRF model, in particular Bi-LSTM+CRF (Huang et al., 2015), as the baseline model in our framework; it is shown in the bottom part of Figure 1.

2.2 Variational Autoencoder

The above model, like other encoder-decoder models (Sutskever et al., 2014; Bahdanau et al., 2014), learns deterministic and discriminative functional mappings. The variational autoencoder (VAE) (Kingma and Welling, 2015; Rezende et al., 2014; Bowman et al., 2015), on the other hand, is stochastic and generative.

[Figure 1: Overall framework using local context reconstruction and delexicalized entity identification for neural sequence labeling. "[SOS]" and "[EOS]" mark the sequence beginning and ending, respectively. Local context reconstruction is applied between any two successive entities, including the special symbols; delexicalized entity identification is applied to all entities except the special symbols. Example sentence: "[SOS] list flights to indianapolis with fares on monday morning , please . [EOS]" with labels "[SOS] O O O B-FROMLOC O O O B-DATE I-DATE O O O [EOS]".]

Using VAE, we may assume a sequence x = [x_1, x_2, ..., x_N] is generated stochastically from a latent global variable z with joint probability

$p(x, z) = p(x|z)\, p(z)$,  (3)

where p(z) is the prior probability of z, generally a simple Gaussian distribution that keeps the model from generating x deterministically, and p(x|z) is a generation density, usually modeled with a conditional language model whose initial state is z.

Maximum likelihood training of a model for Eq. (3) involves a computationally intractable integration over z. To circumvent this, VAE uses variational inference with a variational distribution $q(z|x) = \mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$, whose mean vector $\mu$ and diagonal variance $\mathrm{diag}(\sigma^2)$ are parameterized by neural networks. VAE also uses the reparameterization trick to obtain the latent variable z:

$z = \mu + \sigma \odot \epsilon$,  (4)

where $\epsilon$ is sampled from a standard Gaussian distribution and $\odot$ denotes the element-wise product. The evidence lower bound (ELBO) on the likelihood p(x) follows from Jensen's inequality, $\mathbb{E}_{q(z|x)}[\log p(x, z) - \log q(z|x)] \leq \log p(x)$:

$\mathcal{L}_{vae}(x) = -\mathrm{KL}(q(z|x)\,||\,p(z)) - \mathrm{CE}(q(z|x)\,|\,p(x|z))$,  (5)

where KL(q||p) and CE(q|p) respectively denote the Kullback-Leibler divergence and the cross-entropy between distributions q and p. The ELBO can be optimized by alternating between optimizations of the parameters of q(z|x) and p(x|z).
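The reparameterization trick of Eq. (4) and the two ELBO terms of Eq. (5) can be written out as a short PyTorch sketch. This is a generic illustration under the assumptions stated in the comments, not the paper's implementation; in particular, `recon_log_probs` is assumed to already hold the per-token log-probabilities of the observed words under p(x|z).

```python
import torch

def reparameterize(mu, sigma):
    """Eq. (4): z = mu + sigma * eps, with eps drawn from N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + sigma * eps

def elbo(mu, sigma, recon_log_probs):
    """Eq. (5) for one latent vector: the analytic KL between
    N(mu, diag(sigma^2)) and N(0, I), plus the reconstruction cross-entropy.
    `recon_log_probs` has shape (sequence_length,) and holds log p(x_i | z)."""
    kl = 0.5 * torch.sum(mu ** 2 + sigma ** 2 - 1.0 - torch.log(sigma ** 2), dim=-1)
    ce = -recon_log_probs.sum(dim=-1)
    return -(kl + ce)  # ELBO = -KL - CE, to be maximized
```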
We apply VAE to reconstruct local context from slot/entity tags in Figure 1. This is a generation process that is inherently one-to-many. We observe that VAE is superior to a deterministic model (Bahdanau et al., 2014) for learning representations of rare entities.

2.3 Adversarial Training

Adversarial training (Goodfellow et al., 2014), originally proposed to improve robustness to noise in images, has later been extended to NLP tasks such as text classification (Miyato et al., 2015, 2016) and learning word representations (Gong et al., 2018). We apply adversarial training to learn better representations of low-frequency entities via delexicalized entity identification in Figure 1. It has a discriminator to differentiate the representations of the original low-frequency entities from the representations of the delexicalized entities. Training aims at obtaining representations that can fool the discriminator, thereby achieving frequency-agnosticism and entity-type-specificity.

3 The Model

We illustrate the overall framework of the proposed model in Figure 1. Its baseline sequence labeling module is described in Section 2.1. We describe the details of local context reconstruction in Sec. 3.1 and delexicalized entity identification in Sec. 3.2, with an example illustrating both in Figure 2. We denote the parameters in Sec. 2.1 as $\theta_{rnn}$ and $\theta_{emb}$, respectively, for the RNN and the matrix used to obtain embeddings. The parameters in Sec. 3.1 and Sec. 3.2 are denoted $\theta_{lcr}$ and $\theta_D$, respectively.

[Figure 2: An example illustrating (a) VAE for local context reconstruction (OOV masking with "[UNK]", RNN-minus span feature, posterior Gaussian MLP, reparameterization trick, KL divergence) and (b) adversarial training for delexicalized entity identification (delexicalization, pooling, MLP discriminator, binary classification with cross-entropy).]

3.1 Local Context Reconstruction

Contrary to conventional methods that explicitly provide abundant lexical features from external knowledge, we implicitly enrich word representations with contextual information by training them to reconstruct their local contexts.

Masking. Every entity word $x_i$ in sequence X, i.e., every word not associated with the non-entity label "O", is first randomly masked with the OOV symbol "[UNK]" as follows:

$x_i^u = \text{"[UNK]"}$ if $y_i \neq \text{"O"}$ and $\epsilon > p$; otherwise $x_i^u = x_i$,  (6)

where the constant p is a threshold and $\epsilon$ is uniformly sampled from [0, 1].

Forward Reconstruction. In the forward reconstruction process, the forward pass of Eq. (1) is first applied to the sequence $X^u = [x_1^u, x_2^u, \ldots, x_N^u]$ to obtain hidden states $\overrightarrow{h}_i^u$. Then a forward span representation $m_{jk}^f$ of the local context between positions j and k is obtained using the RNN-minus feature (Wang and Chang, 2016):

$m_{jk}^f = \overrightarrow{h}_k^u - \overrightarrow{h}_j^u$.  (7)

To apply VAE to reconstruct the local context, the mean $\mu$ and log-variance $\log\sigma$ are first computed from this representation:

$\mu_{jk}^f = W_1^{\mu} \tanh(W_0^{\mu} m_{jk}^f)$,  $\log \sigma_{jk}^f = W_1^{\sigma} \tanh(W_0^{\sigma} m_{jk}^f)$,  (8)

where the $W_*^*$ are all learnable matrices. The reparameterization trick of Eq. (4) is then applied to $\mu_{jk}^f$ and $\sigma_{jk}^f = \exp(\log \sigma_{jk}^f)$ to obtain a global latent variable $z_{jk}^f$ for the local context.
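A minimal sketch of the three steps just described, the random OOV masking of Eq. (6), the RNN-minus span feature of Eq. (7), and the Gaussian heads of Eq. (8), is given below. It is illustrative rather than the authors' code: the `GaussianHead` name, the hidden-layer width, and the tensor shapes are assumptions.

```python
import random
import torch
import torch.nn as nn

def mask_entities(tokens, labels, p=0.5, unk="[UNK]"):
    """Eq. (6): an entity word (any label other than "O") is replaced by the
    OOV symbol when a uniform draw exceeds the threshold p."""
    return [unk if y != "O" and random.random() > p else x
            for x, y in zip(tokens, labels)]

def forward_span_feature(forward_states, j, k):
    """Eq. (7): the RNN-minus representation of the local context between
    positions j and k, given the (seq_len, d) forward RNN states."""
    return forward_states[k] - forward_states[j]

class GaussianHead(nn.Module):
    """Eq. (8): two small heads map the span feature m to the posterior mean
    and log-variance of the latent local-context variable (hidden width = d
    is an assumption)."""
    def __init__(self, d, latent_dim):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(d, d), nn.Tanh(), nn.Linear(d, latent_dim))
        self.log_sigma = nn.Sequential(nn.Linear(d, d), nn.Tanh(), nn.Linear(d, latent_dim))

    def forward(self, m):
        return self.mu(m), self.log_sigma(m)
```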
To generate the i-th word of the local context sequence $[x_{j+1}, x_{j+2}, \ldots, x_{k-1}]$, we first apply an RNN decoder, with its initial hidden state set to the latent variable $z_{jk}^f$ and its first observation set to the embedding of the "[SOS]" symbol, to recursively obtain hidden states

$\overrightarrow{r}_i^f = \overrightarrow{f}(\mathbf{x}_i, \overrightarrow{r}_{i-1}^f)$.  (9)

This RNN decoder shares parameters with the forward-pass RNN encoder of Eq. (1). We then use a softmax to compute the word distribution at position i:

$\overrightarrow{P}_i^{vae} = \mathrm{Softmax}(W_g^f r_i^f)$,  (10)

where $W_g^f$ is a learnable matrix. Lastly, we compute the KL distance and the cross-entropy for a length-L local context sequence in Eq. (5) as

$\mathrm{KL}_{jk}^f = \sum_d \zeta(\mu_{jk}^f[d], \sigma_{jk}^f[d])$,
$\mathrm{CE}_{jk}^f = -\frac{1}{L} \sum_i \log(\overrightarrow{P}_i^{vae}[x_i])$,
$\overrightarrow{\mathcal{L}}_{jk}^{vae} = -\mathrm{KL}_{jk}^f - \mathrm{CE}_{jk}^f$,  (11)

where d indexes the hidden dimensions and the closed-form KL divergence $\zeta$ is defined as

$\zeta(\mu, \sigma) = \mu^2 + \sigma - (1 + \log \sigma)$.  (12)

Backward Reconstruction. As with the forward reconstruction, the backward reconstruction is applied to non-adjacent successive entities. The backward pass of Eq. (1) is first applied to the entity-masked sequence $X^u$. Once the backward span representation $m_{kj}^b = \overleftarrow{h}_j^u - \overleftarrow{h}_k^u$ of the local context between positions k and j is obtained, the same procedure as the forward reconstruction is carried out, except that the backward RNN encoder $\overleftarrow{f}(\cdot)$ is used in place of the forward RNN encoder in Eq. (9). The objective for local context reconstruction is

$\mathcal{J}_{vae}(X; \theta_{lcr}, \theta_{rnn}) = \max_{\theta_{lcr}, \theta_{rnn}} \sum_{jk} \overrightarrow{\mathcal{L}}_{jk}^{vae} + \overleftarrow{\mathcal{L}}_{jk}^{vae}$,  (13)

which maximizes the ELBO with respect to the parameters $\theta_{lcr}$ and $\theta_{rnn}$.

3.2 Delexicalized Entity Identification

For low-frequency entities, delexicalized entity identification aims at obtaining frequency-agnostic and entity-type-specific representations.

Delexicalization. We first randomly substitute entity words in the input sequence X with their corresponding labels:

$x_i^d = y_i$ if $y_i \neq \text{"O"}$ and $\epsilon > p$; otherwise $x_i^d = x_i$,  (14)

where p is a threshold and $\epsilon$ is uniformly sampled from [0, 1]. We refer to this as delexicalization (Wen et al., 2015), but with randomness inserted.

Representation for Identification. To obtain a representation for identifying whether an entity has been delexicalized to its label, we first run the forward and backward RNN encoders of Eq. (1) over the sentence $X^d = [x_1^d, x_2^d, \ldots, x_N^d]$ and obtain hidden states $\overleftarrow{h}_i^d$ and $\overrightarrow{h}_i^d$ for each position i. Their concatenation is $h_i^d = \overleftarrow{h}_i^d \oplus \overrightarrow{h}_i^d$. For position i in the original sequence without delexicalization, the concatenated hidden state is $h_i = \overleftarrow{h}_i \oplus \overrightarrow{h}_i$. For an entity spanning positions j to k, its representation $e_{jk}^d$ is obtained by average pooling:

$e_{jk}^d = \frac{1}{k-j+1} \sum_i h_i^d$.  (15)

Average pooling is also applied to the $h_i$ to obtain $e_{jk}$ for the original entity with that span.

Discriminator. A multi-layer perceptron (MLP) based discriminator with parameters $\theta_D$ outputs a confidence score in [0, 1], indicating the probability that an entity has been delexicalized:

$p_{jk}^d = \sigma(v_d^{T} \tanh(W_d\, e_{jk}^d))$,  $p_{jk} = \sigma(v_d^{T} \tanh(W_d\, e_{jk}))$,  (16)

where the parameters $v_d$ and $W_d$ are learnable and $\sigma(x)$ is the sigmoid function $\frac{1}{1+\exp(-x)}$.
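To make Eqs. (14)-(16) concrete, here is a small sketch, under the assumptions noted in the comments rather than the paper's exact implementation, of the random delexicalization and of an MLP discriminator that scores an average-pooled entity span.

```python
import random
import torch
import torch.nn as nn

def delexicalize(tokens, labels, p=0.5):
    """Eq. (14): an entity word is replaced by its label string when a uniform
    draw exceeds the threshold p, giving the delexicalized sequence X^d."""
    return [y if y != "O" and random.random() > p else x
            for x, y in zip(tokens, labels)]

class DelexDiscriminator(nn.Module):
    """Eqs. (15)-(16): an entity span is represented by average-pooling the
    concatenated hidden states over its positions, and an MLP outputs the
    probability that the span was delexicalized. Dimensions are illustrative."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)   # plays W_d
        self.score = nn.Linear(hidden_dim, 1)           # plays v_d

    def forward(self, states, j, k):                    # states: (seq_len, hidden_dim)
        span = states[j:k + 1].mean(dim=0)              # Eq. (15): average pooling
        return torch.sigmoid(self.score(torch.tanh(self.proj(span))))  # Eq. (16)
```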
Following the principle of adversarial training, we develop the following minimax objective to train the RNN model $\theta_{rnn}$ and the discriminator $\theta_D$:

$\mathcal{J}_{at}(X, Y; \theta_D, \theta_{rnn}) = \min_{\theta_D} \max_{\theta_{rnn}} \sum_{jk} \log(p_{jk}) + \log(1 - p_{jk}^d)$,  (17)

which aims at fooling a strong discriminator $\theta_D$ while optimizing $\theta_{rnn}$, leading to frequency-agnostic representations.

4 Training Algorithm

The model has three modules, each with its own objective. We update their parameters jointly using Algorithm 1. The algorithm first improves the discriminator $\theta_D$ at identifying delexicalized items. It then updates $\theta_{lcr}$ and $\theta_{rnn}$ with the joint optimization of $\mathcal{J}_{vae}$ and $\mathcal{J}_{at}$ so that $\theta_{rnn}$ better fools the discriminator. Since VAE optimization of $\mathcal{J}_{vae}$ suffers from the posterior collapse problem, we adopt the KL cost annealing strategy and word dropout techniques of Bowman et al. (2015). Finally, the algorithm updates both $\theta_{rnn}$ and $\theta_{emb}$ in Bi-LSTM+CRF by gradient ascent according to Eq. (2). Note that $\theta_{lcr}$ shares parameters with $\theta_{rnn}$ and $\theta_{emb}$. During experiments, we also find it beneficial to pretrain the parameters $\theta_{rnn}$ and $\theta_{emb}$ for a few epochs by optimizing Eq. (2).

Algorithm 1: Training Algorithm
Input: dataset S, $\theta_{rnn}$, $\theta_{emb}$, $\theta_{lcr}$, $\theta_D$.
1: repeat
2:   Sample a minibatch of pairs (X, Y).
3:   Update $\theta_D$ by gradient descent according to Eq. (17).
4:   Update $\theta_{lcr}$ and $\theta_{rnn}$ by gradient ascent to jointly maximize $\mathcal{J}_{vae} + \mathcal{J}_{at}$ according to Eqs. (13) and (17).
5:   Update $\theta_{rnn}$ and $\theta_{emb}$ by gradient ascent according to Eq. (2).
6: until convergence
Output: $\theta_{rnn}$, $\theta_{emb}$, $\theta_{lcr}$, $\theta_D$.

5 Experiments

This section compares the proposed model against state-of-the-art models on benchmark datasets.

5.1 Settings

Slot Filling. We use the publicly available ATIS dataset (Tur et al., 2010) and the SNIPS dataset (Coucke et al., 2018).
We apply dropout to hid2https://nlp.stanford.edu/software/lex-parser.shtml. 3https://github.com/jiesutd/NCRFpp. 4https://github.com/LiyuanLucasLiu/LM-LSTM-CRF. 5https://github.com/Adaxry/GCDT. 6Few results are not available for comparison as Qin et al. (2019); Liu et al. (2019b) are for mult-task learning of intent detection and slot filling. den states with a rate of 0.3. L2 regularization is set as 1 × 10−6 to avoid overfit. Following (Liu et al., 2018, 2019a,b), we adopt the cased, 300d Glove (Pennington et al., 2014) to initialize word embeddings. We utilize Adam algorithm (Kingma and Ba, 2015) to optimize the models and adopt the suggested hyper-parameters. 5.2 Main Results The main results of the proposed model on ATIS and CoNLL-03 are illustrated in Table 2. The proposed model outperforms all other models on all tasks by a substantial margin. On slot filling tasks, the model obtains averaged improvements of 0.15 points on ATIS and 1.53 points on SNIPS over CMNet and Stack-propagation, without using extra information from jointly modeling of slots and intents in these models. In comparison to the prior stateof-the-art models of GCDT, the improvements are 0.03 points on ATIS, 2.17 points on SNIPS and 0.71 points on CoNLL-03. Compared with strong baseline (Lample et al., 2016) that utilizes char embedding to improve BiLSTM + CRF, the gains are even larger. The model obtains improvements of 0.84 points on ATIS, 3.49 points on SNIPS and 1.73 points on CoNLL-03, over Bi-LSTM + CRF and LM-LSTM-CRF. Finally, we have tried improving the baseline BiLSTM+CRF in our model with external resources of lexical information, including part-of-speech tags, chunk tags and character embeddings. However, their F1 scores are consistently below the proposed model by an average of 1.47 points. We also replace word embeddings in Bi-LSTM+CRF with those from fine-tuned BERTLARGE but its results are worse than the proposed model, by 0.07 6447 Method SNIPS Bi-LSTM + CRF + LCR + DEI 97.20 w/o LCR 94.37 w/o VAE, w/ LSTM-LM 96.02 w/o OOV masking 95.63 w/o DEI 95.82 w/o LCR, DEI (Bi-LSTM + CRF) 93.37 Table 3: Ablation experiments for local context reconstruction (LCR) and delexicalized entity identification (DEI). LCR includes VAE and OOV masking. points, 1.05 points and 0.14 points, respectively, on ATIS, SNIPS, and CoNLL-03. 6 Analysis It is noteworthy that the substantial improvements by the model are obtained without using external resources nor large pre-trained models. Keys to its success are local context reconstruction and delexicalized entity identification. This section reports our analysis of these modules. 6.1 Ablation Study Local Context Reconstruction (LCR) We first examine the impact bought by the LCR process. In Table 3, we show that removing LCR (w/o LCR) hurts performance significantly on SNIPS. We then study if constructing local context in LCR using a traditional deterministic encoder-decoder can be equally effectively as using VAE. We make a good faith attempt of using LSTM-based language model (Sundermeyer et al., 2012) to generate local context directly from local context representation (w/o VAE, w/ LSTM-LM). This does improve results over that without LCR at all, indicating the information from reconstructing local context is indeed useful. However, its F1 score is still far worse than that of using VAE. This confirms that VAE is superior to deterministic model in dealing with the inherently one-to-many generation of local context from entities. 
Lastly, we examine the impact of OOV masking and observe that F1 score without it (w/o OOV masking) drops about 1.6 point below the model. We attribute this improvement from OOV masking to mitigating the entity distribution discrepancy between training and test. Delexicalized Entity Identification (DEI) Removing delexicalized entity identification (w/o DEI) performs worse than the model, with large drop of 1.38 point on SNIPS. Method CoNLL-03 OOV LF LM-LSTM-CRF 2049 1136 GCDT 2073 1149 Bi-LSTM + CRF 2041 1135 w/ external resource‡ 2052 1143 w/ BERT fine-tuned embedding∗ 2084 1153 Bi-LSTM + CRF + LCR 2112 1139 Bi-LSTM + CRF + DEI 2043 1169 Bi-LSTM + CRF + LCR + DEI 2124 1181 Total 2509 1363 Table 4: Numbers of OOV and LF entities that are correctly labeled. ∗refers to fine tuning on pretrained large models. ‡ refers to adopting external resources. These results show that both local context reconstruction and delexicalized entity identification contribute greatly to the improved performance by the proposed model. Because both LCR and DEI share the same RNN-encoder as the baseline BiLSTM, the information from reconstructing local context and fooling the discriminator of delexicalization is useful for the Bi-LSTM to better predict sequence labels. 6.2 Rare Entity Handling In this section, we compare models specifically by the numbers of OOV and LF entities they can recall correctly. Such comparison reveals the capability of each model in handling rare entities. Results are presented in Table 4. In the case of without using any external resource and pre-trained models, the proposed model recalls 3.66% more OOV entities and 3.96% more LF entities than LM-LSTM-CRF. This gain is similar when comparing against Bi-LSTM+CRF. Furthermore, the proposed model also recalls more rare entities than GCDT, a recent state-of-the-art model in NER. Separately using LCR or DEI improves performance over baseline Bi-LSTM+CRF. Their gains are complementary as results show that jointly applying LCR and DEI obtains the best performance. These results demonstrate convincingly the capability of local context reconstruction and delexicalized entity identification in rare entities. Importantly, results in the last two rows reveal that potentially large improvements can be potentially achieved since there are still 15.34% of OOV entities and 13.35% of LF entities not recalled. 6448 Figure 3: Visualization of learned representations on CoNLL-03 test dataset. Entity types are represented in different shapes with red for PER, blue for ORG, green for LOC and orange for MISC. Rare entities are represented using bigger points. The points with ”X” are for the delexicalized entities. 6.3 Representation for Delexicalized Entity Identification We visualize the learned representation at Eq. (15) using t-SNE (Maaten and Hinton, 2008) in Figure 3. It shows 2-dimensional projections of randomly sampled 800 entities on CoNLL-03 dataset. Figure 3 clearly shows separability of entities by their entity types but no separations among lowfrequency and frequent entities. This observation is consistent to the mini-max objective in Eq. (17) to learn entity-type-specific and frequency-agnostic representations. 6.4 Handling Data Scarcity This section investigates the proposed model on data scarcity. On ATIS, the percentage of training samples are reduced down to 20% of the original size, with a reduction size of 20%. This setting is challenging and few previous works have experimented. 
Results in Figure 3 show that the proposed model consistently outperforms other models, especially in low-resource conditions. Furthermore, reductions of performance from the proposed model are much smaller, in comparison to other models. For instance, at percentage 40%, the proposed model only lose 1.17% of its best F1 score whereas GCDT loses 3.62% of its F1 score. This suggests that the proposed model is more robust to low resource than other models. Figure 4: Comparisons with respect to different percentage of training data on ATIS. 7 Related Work Neural sequence labeling has been an active field in NLP, and we briefly review recently proposed approaches related to our work. Slot Filling and NER Neural sequence labeling has been applied to slot filling (Mesnil et al., 2014; Zhang and Wang, 2016; Liu and Lane, 2016; Qin et al., 2019) and NER (Huang et al., 2015; Strubell et al., 2017; Liu et al., 2018; Devlin et al., 2018; Liu et al., 2019a). For slot filling, multi-task learning for joint slot filling and intent detection has been dominating in the recent literature, for example (Liu and Lane, 2016). The recent work in (Liu et al., 2019b) employs a collaborative memory network to further model the semantic correlations among words, slots and intents jointly. For NER, recent works use explicit architecture to incorporate information such as global context (Liu et al., 2019a) or conduct optimal architecture searches (Jiang et al., 2019). The best performing models have been using pre-training models on large corpus (Baevski et al., 2019) or incorporating fine-tuning on existing pre-trained models (Liu et al., 2019a) such as BERT (Devlin et al., 2018). External Resource This approach to handle rare entities includes feature engineering methods such as incorporating extra knowledge from part-ofspeech tags (Huang et al., 2015) or character embeddings (Li et al., 2018). Extra knowledge also includes tags from public tagger (Manning et al., 2014). Multi-task learning has been effective in incorporating additional label information through multiple objectives. Joint slot filling and intent detection have been used in (Zhang and Wang, 2016; 6449 Qin et al., 2019; Zhang et al., 2019). Joint part-ofspeech tagging and NER have been used in (Lin et al., 2018). Transfer Learning This approach refers to methods that transfer knowledge from highresources to low-resources (Zhou et al., 2019) or use models pretrained on large corpus to benefit downstream tasks (Devlin et al., 2018; Liu et al., 2019a). The most recent work in (Zhou et al., 2019) applies adversarial training that uses a resourceadversarial discriminator to improve performances on low-resource data. 8 Conclusion We have presented local context reconstruction for OOV entities and delexicalized entity identification for low-frequency entities to address the rare entity problem. We adopt variational autoencoder to learn a stochastic reconstructor for the reconstruction and adversarial training to extract frequency-agnostic and entity-type-specific features. Extensive experiments have been conducted on both slot filling and NER tasks on three benchmark datasets, showing that sequence labeling using the proposed methods achieve new state-of-the-art performances. Importantly, without using external knowledge nor fine tuning of large pretrained models, our methods enable a sequence labeling model to outperform models fine-tuned on BERT. 
Our analysis also indicates large potential of further performance improvements by exploiting OOV and LF entities. Acknowledgments This work was done while the first author did internship at Ant Financial. We thank anonymous reviewers for valuable suggestions. References Isabelle Augenstein, Leon Derczynski, and Kalina Bontcheva. 2017. Generalisation in named entity recognition: A quantitative analysis. Computer Speech & Language, 44:61–83. Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. 2019. Clozedriven pretraining of self-attention networks. arXiv preprint arXiv:1903.07785. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Issam Bazzi. 2002. Modelling out-of-vocabulary words for robust speech recognition. Ph.D. thesis, Massachusetts Institute of Technology. Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. 2015. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349. Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. pages 2493–2537. Alice Coucke, Alaa Saade, Adrien Ball, Th´eodore Bluche, Alexandre Caulier, David Leroy, Cl´ement Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. NAACL, pages 4171–4186. Chengyue Gong, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2018. Frage: Frequency-agnostic word representation. In Advances in neural information processing systems, pages 1334–1345. Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and YunNung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 753–757. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Yufan Jiang, Chi Hu, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2019. Improved differentiable architecture search for language modeling and named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3576–3581. Diederick P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR). 6450 Diederik P Kingma and Max Welling. 2015. Autoencoding variational bayes. ICLR. John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics. Changliang Li, Liang Li, and Ji Qi. 2018. A selfattentive model with gate mechanism for spoken language understanding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3824–3833. Ying Lin, Shengqi Yang, Veselin Stoyanov, and Heng Ji. 2018. A multi-lingual multi-task architecture for low-resource sequence labeling. ACL, pages 799– 809. Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. arXiv preprint arXiv:1609.01454. Liyuan Liu, Jingbo Shang, Xiang Ren, Frank Fangzheng Xu, Huan Gui, Jian Peng, and Jiawei Han. 2018. Empower sequence labeling with task-aware neural language model. In Thirty-Second AAAI Conference on Artificial Intelligence. Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019a. GCDT: A global context enhanced deep transition architecture for sequence labeling. ACL, pages 2431–2441. Yijin Liu, Fandong Meng, Jinchao Zhang, Jie Zhou, Yufeng Chen, and Jinan Xu. 2019b. CMNet: A novel collaborative memory network for spoken language understanding. arXiv preprint arXiv:1909.06937. Ying Luo, Fengshun Xiao, and Hai Zhao. 2020. Hierarchical contextualized representation for named entity recognition. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. ACL, pages 1064–1074. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60. Gr´egoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, et al. 2014. Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(3):530–539. Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2016. Adversarial training methods for semisupervised text classification. Takeru Miyato, Shin ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. 2015. Distributional smoothing with virtual adversarial training. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, and Ting Liu. 2019. A stack-propagation framework with token-level intent detection for spoken language understanding. arXiv preprint arXiv:1909.02188. Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative mod- els. ICML. 
Erik F Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language- independent named entity recognition. arxiv preprint cs/0306050. Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on, 45:2673–2681. Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate sequence labeling with iterated dilated convolutions. EMNLP, 138:2670–2680. Martin Sundermeyer, Ralf Schl¨uter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In Thirteenth annual conference of the international speech communication association. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. 6451 Gokhan Tur, Dilek Hakkani-T´ur, and Larry Heck. 2010. What is left to be understood in ATIS? IEEE Spoken Language Technology Workshop (SLT), pages 19–24. Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional lstm. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2306–2315. Tsung-Hsien Wen, Milica Gasic, Dongho Kim, Nikola Mrksic, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Stochastic language generation in dialogue using recurrent neural networks with convolutional sentence reranking. arXiv preprint arXiv:1508.01755. Yingwei Xin, Ethan Hart, Vibhuti Mahajan, and JeanDavid Ruvini. 2018. Learning better internal structure of words for sequence labeling. Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, and Philip S. Yu. 2019. Joint slot filling and intent detection via capsule neural networks. ACL, pages 5259– 5267. Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. In IJCAI, volume 16, pages 2993–2999. Joey Tianyi Zhou, Hao Zhang, Di Jin, Hongyuan Zhu, Meng Fang, Rick Siow Mong Goh, and Kenneth Kwok. 2019. Dual adversarial neural transfer for low-resource named entity recognition. ACL, pages 3461–3471.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6452–6459 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6452 Instance-Based Learning of Span Representations: A Case Study through Named Entity Recognition Hiroki Ouchi1,2 Jun Suzuki2,1 Sosuke Kobayashi2,3 Sho Yokoi2,1 Tatsuki Kuribayashi2,4 Ryuto Konno2 Kentaro Inui2,1 1 RIKEN 2 Tohoku University 3 Preferred Networks, Inc. 4 Langsmith, Inc. [email protected] {jun.suzuki,sosk,yokoi,kuribayashi,ryuto,inui}@ecei.tohoku.ac.jp Abstract Interpretable rationales for model predictions play a critical role in practical applications. In this study, we develop models possessing interpretable inference process for structured prediction. Specifically, we present a method of instance-based learning that learns similarities between spans. At inference time, each span is assigned a class label based on its similar spans in the training set, where it is easy to understand how much each training instance contributes to the predictions. Through empirical analysis on named entity recognition, we demonstrate that our method enables to build models that have high interpretability without sacrificing performance. 1 Introduction Neural networks have contributed to performance improvements in structured prediction. Instead, the rationales underlying the model predictions are difficult for humans to understand (Lei et al., 2016). In practical applications, interpretable rationales play a critical role for driving human’s decisions and promoting human-machine cooperation (Ribeiro et al., 2016). With this motivation, we aim to build models that have high interpretability without sacrificing performance. As an approach to this challenge, we focus on instance-based learning. Instance-based learning (Aha et al., 1991) is a machine learning method that learns similarities between instances. At inference time, the class labels of the most similar training instances are assigned to the new instances. This transparent inference process provides an answer to the following question: Which points in the training set most closely resemble a test point or influenced the prediction? This is categorized into example-based explanations (Plumb et al., 2018; Baehrens et al., 2010). Recently, despite its preferable property, it has received little attention and been underexplored. This study presents and investigates an instancebased learning method for span representations. A span is a unit that consists of one or more linguistically linked words. Why do we focus on spans instead of tokens? One reason is relevant to performance. Recent neural networks can induce good span feature representations and achieve high performance in structured prediction tasks, such as named entity recognition (NER) (Sohrab and Miwa, 2018; Xia et al., 2019), constituency parsing (Stern et al., 2017; Kitaev et al., 2019), semantic role labeling (SRL) (He et al., 2018; Ouchi et al., 2018) and coreference resolution (Lee et al., 2017). Another reason is relevant to interpretability. The tasks above require recognition of linguistic structure that consists of spans. Thus, directly classifying each span based on its representation is more interpretable than token-wise classification such as BIO tagging, which reconstructs each span label from the predicted token-wise BIO tags. Our method builds a feature space where spans with the same class label are close to each other. 
At inference time, each span is assigned a class label based on its neighbor spans in the feature space. We can easily understand why the model assigned the label to the span by looking at its neighbors. Through quantitative and qualitative analysis on NER, we demonstrate that our instancebased method enables to build models that have high interpretability and performance. To sum up, our main contributions are as follows. • This is the first work to investigate instancebased learning of span representations.1 • Through empirical analysis on NER, we demonstrate our instance-based method enables to build models that have high interpretability without sacrificing performance. 1Our code is publicly available at https://github. com/hiroki13/instance-based-ner.git. 6453 2 Related Work Neural models generally have a common technical challenge: the black-box property. The rationales underlying the model predictions are opaque for humans to understand. Many recent studies have tried to look into classifier-based neural models (Ribeiro et al., 2016; Lundberg and Lee, 2017; Koh and Liang, 2017). In this paper, instead of looking into the black-box, we build interpretable models based on instance-based learning. Before the current neural era, instance-based learning, sometimes called memory-based learning (Daelemans and Van den Bosch, 2005), was widely used for various NLP tasks, such as part-of-speech tagging (Daelemans et al., 1996), dependency parsing (Nivre et al., 2004) and machine translation (Nagao, 1984). For NER, some instance-based models have been proposed (Tjong Kim Sang, 2002; De Meulder and Daelemans, 2003; Hendrickx and van den Bosch, 2003). Recently, despite its high interpretability, this direction has not been explored. One exception is Wiseman and Stratos (2019), which used instance-based learning of token representations. Due to BIO tagging, it faces one technical challenge: inconsistent label prediction. For example, an entity candidate “World Health Organization” can be assigned inconsistent labels such as “B-LOC I-ORG I-ORG,” whereas the groundtruth labels are “B-ORG I-ORG I-ORG.” To remedy this issue, they presented a heuristic technique for encouraging contiguous token alignment. In contrast to such token-wise prediction, we adopt span-wise prediction, which can naturally avoid this issue because each span is assigned one label. NER is generally solved as (i) sequence labeling or (ii) span classification.2 In the first approach, token features are induced by using neural networks and fed into a classifier, such as conditional random fields (Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016). One drawback of this approach is the difficulty dealing with nested entities.3 By contrast, the span classification approach, adopted in this study, can straightforwardly solve nested NER (Finkel and Manning, 2009; Sohrab and Miwa, 2018; Xia et al., 2019).4 2Very recently, a hybrid model of these two approaches has been proposed by Liu et al. (2019). 3Some studies have sophisticated sequence labeling models for nested NER (Ju et al., 2018; Zheng et al., 2019). 4There is an approach specialized for nested NER using hypergraphs (Lu and Roth, 2015; Muis and Lu, 2017; Katiyar and Cardie, 2018; Wang and Lu, 2018). 3 Instance-Based Span Classification 3.1 NER as span classification NER can be solved as multi-class classification, where each of possible spans in a sentence is assigned a class label. 
As we mentioned in Section 2, this approach can naturally avoid inconsistent label prediction and straightforwardly deal with nested entities. Because of these advantages over tokenwise classification, span classification has been gaining a considerable attention (Sohrab and Miwa, 2018; Xia et al., 2019). Formally, given an input sentence of T words X = (w1, w2, . . . , wT ), we first enumerate possible spans S(X), and then assign a class label y ∈Y to each span s ∈S(X). We will write each span as s = (a, b), where a and b are word indices in the sentence: 1 ≤a ≤b ≤T. Consider the following sentence. Franz1 Kafka2 is3 a4 novelist5 [ PER ] Here, the possible spans in this sentence are S(X) = {(1, 1), (1, 2), (1, 3), . . . , (4, 5), (5, 5)}. “Franz Kafka,” denoted as s = (1, 2), is assigned the person type entity label (y = PER). Note that the other non-entity spans are assigned the null label (y = NULL). For example, “a novelist,” denoted as s = (4, 5), is assigned NULL. In this way, the NULL label is assigned to non-entity spans, which is the same as the O tag in the BIO tag set. The probability that each span s is assigned a class label y is modeled by using softmax function: P(y|s) = exp(score(s, y)) X y′∈Y exp(score(s, y′)) . Typically, as the scoring function, the inner product between each label weight vector wy and span feature vector hs is used: score(s, y) = wy · hs . The score for the NULL label is set to a constant, score(s, y = NULL) = 0, similar to logistic regression (He et al., 2018). For training, the loss function we minimize is the negative log-likelihood: L = − X (X,Y )∈D X (s,y)∈S(X,Y ) log P(y|s) , where S(X, Y ) is a set of pairs of a span s and its ground-truth label y. We call this kind of models that use label weight vectors for classification classifier-based span model. 6454 [Haruki Murakami] [wrote] [Kafka on the Shore] [in] [Hawaii] PER NULL MISC NULL LOC [Born in] [Moscow] , [Dostoevsky] [was introduced to] … NULL LOC PER NULL [Franz Kafka] is a novelist Training Set Encoder NULL PER LOC ? MISC argmax Vectorize Compute similarity Figure 1: Illustration of our instance-based span model. An entity candidate “Franz Kafka” is used as a query and vectorized by an encoder. In the vector space, similarities between all pairs of the candidate (s) and the training instances (s1, s2, . . . , s9) are computed, respectively. Based on the similarities, the label probability (distribution) is computed, and the label with the highest probability PER is assigned to “Franz Kafka.” 3.2 Instance-based span model Our instance-based span model classifies each span based on similarities between spans. In Figure 1, an entity candidate “Franz Kafka” and the spans in the training set are mapped onto the feature vector space, and the label distribution is computed from the similarities between them. In this inference process, it is easy to understand how much each training instance contributes to the predictions. This property allows us to explain the predictions by specific training instances, which is categorized into example-based explanations (Plumb et al., 2018). Formally, within the neighbourhood component analysis framework (Goldberger et al., 2005), we define the neighbor span probability that each span si ∈S(X) will select another span sj as its neighbor from candidate spans in the training set: P(sj|si, D′) = exp(score(si, sj)) X sk∈S(D′) exp(score(si, sk)) . 
(1) Here, we exclude the input sentence X and its ground-truth labels Y from the training set D: D′ = D \ {(X, Y )}, and regard all other spans as candidates: S(D′) = {s ∈S(X′)|(X′, Y ′) ∈D′}. The scoring function returns a similarity between the spans si and sj. Then we compute the probability that a span si will be assigned a label yi: P(yi|si) = X sj∈S(D′,yi) P(sj|si, D′) . (2) Here, S(D′, yi) = {sj ∈D′| yi = yj}, so the equation indicates that we sum up the probabilities of the neighbor spans that have the same label as the span si. The loss function we minimize is the negative log-likelihood: L = − X (X,Y )∈D X (si,yi)∈S(X,Y ) log P(yi|si) , where S(X, Y ) is a set of pairs of a span si and its ground-truth label yi. At inference time, we predict ˆyi to be the class label with maximal marginal probability: ˆyi = arg max y∈Y P(y|si) , where the probability P(y|si) is computed for each of the label set y ∈Y. Efficient neighbor probability computation The neighbor span probability P(sj|si, D′) in Equation 1 depends on the entire training set D′, which leads to heavy computational cost. As a remedy, we use random sampling to retrieve K sentences D′′ = {(X′ k, Y ′ k)}K k=0 from the training set D′. At training time, we randomly sample K sentences for each mini-batch at each epoch. This simple technique realizes time and memory efficient training. In our experiments, it takes less than one day to train a model on a single GPU5. 5NVIDIA DGX-1 with Tesla V100. 6455 4 Experiments 4.1 Experimental setup Data We evaluate the span models through two types of NER: (i) flat NER on the CoNLL-2003 dataset (Tjong Kim Sang and De Meulder, 2003) and (ii) nested NER on the GENIA dataset6 (Kim et al., 2003). We follow the standard trainingdevelopment-test splits. Baseline We use a classifier-based span model (Section 3.1) as a baseline. Only the difference between the instance-based and classifier-based span models is whether to use softmax classifier or not. Encoder and span representation We adopt the encoder architecture proposed by Ma and Hovy (2016), which encodes each token of the input sentence wt ∈X with word embedding and characterlevel CNN. The encoded token representations w1:T = (w1, w2, . . . , wT ) are fed to bidirectional LSTM for computing contextual ones −→ h 1:T and ←− h 1:T . From them, we create hlstm s for each span s = (a, b) based on LSTM-minus (Wang and Chang, 2016). For flat NER, we use the representation hlstm s = [−→ h b −−→ h a−1, ←− h a −←− h b+1]. For nested NER, we use hlstm s = [−→ h b −−→ h a−1, ←− h a − ←− h b+1, −→ h a + −→ h b, ←− h a + ←− h b].7 We then multiply hlstm s with a weight matrix W and obtain the span representation: hs = W hlstm s . For the scoring function in Equation 1 in the instance-based span model, we use the inner product between a pair of span representations: score(si, sj) = hsi · hsj. Model configuration We train instance-based models by using K = 50 training sentences randomly retrieved for each mini-batch. At test time, we use K = 50 nearest training sentences for each sentence based on the cosine similarities between their sentence vectors8. For the word embeddings, we use the GloVe 100-dimensional embeddings (Pennington et al., 2014) and the BERT embeddings (Devlin et al., 2019).9 6We use the same one pre-processed by Zheng et al. 
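To make Equations 1–2 and the sampling trick above concrete, here is a minimal PyTorch sketch of instance-based span classification. It is a schematic re-implementation for illustration, not code from the authors' repository; the tensor shapes and the flat batching of candidate spans are assumptions.

```python
# Minimal sketch of instance-based span classification (Eqs. 1-2), assuming
# span representations have already been produced by the encoder.
import torch
import torch.nn.functional as F

def instance_based_label_probs(query_spans, cand_spans, cand_labels, num_labels):
    """
    query_spans: (Q, d)  span representations of the input sentence
    cand_spans:  (C, d)  span representations from K sampled training sentences
    cand_labels: (C,)    gold label ids of the candidate spans
    Returns (Q, num_labels) label probabilities P(y | s_i).
    """
    scores = query_spans @ cand_spans.t()         # (Q, C) inner-product similarities
    neighbor_probs = F.softmax(scores, dim=-1)    # Eq. 1: P(s_j | s_i, D'')
    onehot = F.one_hot(cand_labels, num_labels).float()  # (C, num_labels)
    return neighbor_probs @ onehot                # Eq. 2: sum probs of same-label neighbors

def loss_and_predictions(query_spans, query_labels, cand_spans, cand_labels, num_labels):
    probs = instance_based_label_probs(query_spans, cand_spans, cand_labels, num_labels)
    nll = -torch.log(probs.gather(1, query_labels.unsqueeze(1)) + 1e-12).mean()
    preds = probs.argmax(dim=-1)                  # label with maximal marginal probability
    return nll, preds

# Example with random tensors (K sampled sentences yielding 200 candidate spans):
q = torch.randn(8, 128)
c = torch.randn(200, 128)
c_y = torch.randint(0, 5, (200,))
q_y = torch.randint(0, 5, (8,))
loss, preds = loss_and_predictions(q, q_y, c, c_y, num_labels=5)
```

The design choice to marginalize over neighbors rather than to pick a single nearest neighbor is what makes the objective differentiable and trainable end to end with the encoder.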
(2019) at https://github.com/thecharm/ boundary-aware-nested-ner 7We use the different span representation from the one used for flat NER because concatenating the addition features, −→ h a +−→ h b and ←− h a +←− h b, to the subtraction features improves performance in our preliminary experiments. 8For each sentence X = (w1, w2, . . . , wT ), its sentence vector is defined as the vector averaged over the word embeddings (GloVe) within the sentence: 1 T P t wemb t . 9Details on the experimental setup are described in Appendices A.1. Classifier-based Instance-based GloVe Flat NER 90.68 ±0.25 90.73 ±0.07 Nested NER 73.76 ±0.35 74.20 ±0.16 BERT Flat NER 90.48 ±0.18 90.48 ±0.07 Nested NER 73.27 ±0.19 73.92 ±0.20 Table 1: Comparison between classifier-based and instance-based span models. Cells show the F1 scores and standard deviations on each test set. F1 score 80.0 82.0 84.0 86.0 88.0 90.0 92.0 94.0 96.0 98.0 100.0 1/8 1/4 1/2 1 Classifier-based Instance-based Data Size Figure 2: Performance on the CoNLL-2003 development set for different amounts of the training set. 4.2 Quantitative analysis We report averaged F1 scores across five different runs of the model training with random seeds. Overall F1 scores We investigate whether or not our instance-based span model can achieve competitive performance with the classifier-based span model. Table 1 shows F1 scores on each test set.10 Consistently, the instance-based span model yielded comparable results to the classifier-based span model. This indicates that our instance-based learning method enables to build NER models without sacrificing performance. Effects of training data size Figure 2 shows F1 scores on the CoNLL-2003 development set by the models trained on full-size, 1/2, 1/4 and 1/8 of the training set. We found that (i) performance of both models gradually degrades when the size of the training set is smaller and (ii) both models yield very competitive performance curves. 10The models using GloVe yielded slightly better results than those using BERT. One possible explanation is that subword segmentation is not so good for NER. In particular, tokens in upper case are segmented into too small elements, e.g., “LEICESTERSHIRE” →“L,” “##EI,” “##CE,” “##ST,” “##ER,” “##S,” “##H,” “##IR,” “##E.” 6456 QUERY ... [Tom Moody] took six for 82 but ... Classifier-based 1 PER ... [Billy Mayfair] and Paul Goydos and ... 2 NULL ... [Billy Mayfair and Paul Goydos] and ... 3 NULL ... [Billy Mayfair and Paul Goydos and] ... 4 NULL ... [Billy] Mayfair and Paul Goydos and ... 5 NULL ... [Ducati rider Troy Corser] , last year ... Instance-based 1 PER [Ian Botham] began his test career ... 2 PER ... [Billy Mayfair] and Paul Goydos and ... 3 PER ... [Mark Hutton] scattered four hits ... 4 PER ... [Steve Stricker] , who had a 68 , and ... 3 PER ... [Darren Gough] polishing off ... Table 2: Example of span retrieval. An entity candidate “Tom Moody” in the CoNLL-2003 development set used as a query for retrieving five nearest neighbors from the training set. QUERY ... spokesman for [Air France] ’s ... Pred: LOC Gold: ORG 1 LOC ... [Colombia] turned down American ’s ... 2 LOC ... involving [Scotland] , Wales , ... 3 LOC ... signed in [Nigeria] ’s capital Abuja ... 4 LOC ... in the West Bank and [Gaza] . 5 LOC ... on its way to [Romania] ... Table 3: Example of an error by the instance-based span model. Although the gold label is ORG (Organization), the wrong label LOC (Location) is assigned. 
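Before turning to the qualitative analysis, the LSTM-minus span representation described in the setup above can be sketched as follows. This is an illustrative sketch, not the authors' code; the zero-padding convention and the span-enumeration helper (with the maximum span size L mentioned in Appendix A.1) are assumptions made to keep the example self-contained.

```python
# Minimal sketch of the LSTM-minus span representation (flat-NER variant):
# h_s = [fwd[b] - fwd[a-1], bwd[a] - bwd[b+1]], followed by a linear projection.
import torch
import torch.nn as nn

def enumerate_spans(T, max_len=6):
    # All spans (a, b) with 1 <= a <= b <= T and b - a < max_len.
    return [(a, b) for a in range(1, T + 1) for b in range(a, min(a + max_len, T + 1))]

class SpanRepresentation(nn.Module):
    def __init__(self, lstm_dim, span_dim):
        super().__init__()
        self.proj = nn.Linear(2 * lstm_dim, span_dim)

    def forward(self, fwd, bwd, spans):
        # fwd, bwd: (T+2, lstm_dim) forward/backward BiLSTM states of one sentence,
        #           padded with a zero state at positions 0 and T+1 so that the
        #           indices a-1 and b+1 are always valid (spans are 1-based).
        reps = []
        for a, b in spans:
            minus = torch.cat([fwd[b] - fwd[a - 1], bwd[a] - bwd[b + 1]], dim=-1)
            reps.append(minus)
        return self.proj(torch.stack(reps))  # (num_spans, span_dim)
```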
4.3 Qualitative analysis To better understand model behavior, we analyze the instance-based model using GloVe in detail. Examples of retrieved spans The span feature space learned by our method can be applied to various downstream tasks. In particular, it can be used as a span retrieval system. Table 2 shows five nearest neighbor spans of an entity candidate “Tom Moody.” In the classifier-based span model, personrelated but non-entity spans were retrieved. By contrast, in the instance-based span model, person (PER) entities were consistently retrieved.11 This tendency was observed in many other cases, and we confirmed that our method can build preferable feature spaces for applications. Errors analysis The instance-based span model tends to wrongly label spans that includes location or organization names. For example, in Table 3, the wrong label LOC (Location) is assigned to “Air France” whose gold label is ORG (Organization). 11The query span “Tom moody” was a cricketer at that time, and some neighbors, “Ian Botham” and “Darren Gough,” were also cricketers. Classifier-based Instance-based GloVe 94.91 ±0.11 94.96 ±0.06 BERT 96.20 ±0.03 96.24 ±0.04 Table 4: Comparison in syntactic chunking. Cells show F1 and standard deviations on the CoNLL-2000 test set. Note that by looking at the neighbors, we can understand that country or district entities confused the model. This implies that prediction errors are easier to analyze because the neighbors are the rationales of the predictions. 4.4 Discussion Generalizability Are our findings in NER generalizable to other tasks? To investigate it, we perform an additional experiment on the CoNLL-2000 dataset (Tjong Kim Sang and Buchholz, 2000) for syntactic chunking.12 While this task is similar to NER in terms of short-span classification, the class labels are based on syntax, not (entity) semantics. In Table 4, the instance-based span model achieved competitive F1 scores with the classifier-based one, which is consistent with the NER results. This suggests that our findings in NER are likely to generalizable to other short-span classification tasks. Future work One interesting line of future work is an extension of our method to span-to-span relation classification, such as SRL and coreference resolution. Another potential direction is to apply and evaluate learned span features to downstream tasks requiring entity knowledge, such as entity linking and question answering. 5 Conclusion We presented and investigated an instance-based learning method that learns similarity between spans. Through NER experiments, we demonstrated that the models build by our method have (i) competitive performance with a classifier-based span model and (ii) interpretable inference process where it is easy to understand how much each training instance contributes to the predictions. Acknowledgments This work was partially supported by JSPS KAKENHI Grant Number JP19H04162 and JP19K20351. We would like to thank the members of Tohoku NLP Laboratory and the anonymous reviewers for their insightful comments. 12The models are trained in the same way as in nested NER. 6457 References David W Aha, Dennis Kibler, and Marc K Albert. 1991. Instance-based learning algorithms. Machine learning, 6(1):37–66. David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and KlausRobert M ˜Aˇzller. 2010. How to explain individual classification decisions. Journal of Machine Learning Research, 11(Jun):1803–1831. Jason P.C. Chiu and Eric Nichols. 2016. 
Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357–370. Walter Daelemans and Antal Van den Bosch. 2005. Memory-based language processing. Cambridge University Press. Walter Daelemans, Jakub Zavrel, Peter Berck, and Steven Gillis. 1996. MBT: A memory-based part of speech tagger-generator. In Proceedings of Fourth Workshop on Very Large Corpora. Fien De Meulder and Walter Daelemans. 2003. Memory-based named entity recognition using unannotated data. In Proceedings of HLT-NAACL, pages 208–211. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171–4186. Jenny Rose Finkel and Christopher D. Manning. 2009. Nested named entity recognition. In Proceedings of EMNLP, pages 141–150. Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of AISTATS, pages 249– 256. Jacob Goldberger, Geoffrey E Hinton, Sam T Roweis, and Ruslan R Salakhutdinov. 2005. Neighbourhood components analysis. In Proceedings of NIPS, pages 513–520. Alan Graves, Navdeep Jaitly, and Abdel-rahman Mohamed. 2013. Hybrid speech recognition with deep bidirectional LSTM. In Proceedings of Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop. Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of ACL, pages 364–369. Iris Hendrickx and Antal van den Bosch. 2003. Memory-based one-step named-entity recognition: Effects of seed list features, classifier stacking, and unannotated data. In Proceedings of CoNLL, pages 176–179. Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of NAACL-HLT, pages 1446–1459. Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of NAACL-HLT, pages 861–871. J-D Kim, Tomoko Ohta, Yuka Tateisi, and Junichi Tsujii. 2003. Genia corpusa semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl 1):i180–i182. D.P. Kingma and J. Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980. Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In Proceedings of ACL, pages 3499– 3505. Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In Proceedings of ICML, pages 1885–1894. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260–270. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of EMNLP, pages 188–197. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of EMNLP, pages 107–117. Tianyu Liu, Jin-Ge Yao, and Chin-Yew Lin. 2019. Towards improving neural named entity recognition with gazetteers. In Proceedings of ACL, pages 5301– 5307. Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of EMNLP, pages 857–867. Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Proceedings of NIPS, pages 4765–4774. 
Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNsCRF. In Proceedings of ACL, pages 1064–1074. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Aldrian Obaja Muis and Wei Lu. 2017. Labeling gaps between words: Recognizing overlapping mentions with mention separators. In Proceedings of EMNLP, pages 2608–2618. 6458 Makoto Nagao. 1984. A framework of a mechanical translation between Japanese and English by analogy principle. Elsevier Science Publishers. Joakim Nivre, Johan Hall, and Jens Nilsson. 2004. Memory-based dependency parsing. In Proceedings of CoNLL, pages 49–56, Boston, Massachusetts, USA. Hiroki Ouchi, Hiroyuki Shindo, and Yuji Matsumoto. 2018. A span selection model for semantic role labeling. In Proceedings of EMNLP, pages 1630– 1642. Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In Proceedings of ICML, pages 1310– 1318. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP), pages 1532–1543. Gregory Plumb, Denali Molitor, and Ameet S Talwalkar. 2018. Model agnostic supervised local explanations. In Proceedings of NIPS, pages 2515– 2524. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of KDD, pages 1135–1144. Andrew M Saxe, James L McClelland, and Surya Ganguli. 2013. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120. Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of EMNLP, pages 2843–2849. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of ACL, pages 818–827. Erik F. Tjong Kim Sang. 2002. Memory-based named entity recognition. In Proceedings of CoNLL). Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task chunking. In Proceedings of CoNLL. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of CoNLL, pages 142–147. Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of EMNLP, pages 204–214. Wenhui Wang and Baobao Chang. 2016. Graph-based dependency parsing with bidirectional LSTM. In Proceedings of ACL, pages 2306–2315. Sam Wiseman and Karl Stratos. 2019. Label-agnostic sequence labeling by copying nearest neighbors. In Proceedings of ACL, pages 5363–5369. Congying Xia, Chenwei Zhang, Tao Yang, Yaliang Li, Nan Du, Xian Wu, Wei Fan, Fenglong Ma, and Philip Yu. 2019. Multi-grained named entity recognition. In Proceedings of ACL, pages 1430–1440. Changmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Leung, and Guandong Xu. 2019. A boundary-aware neural model for nested named entity recognition. In Proceedings of EMNLP-IJCNLP, pages 357–366. 
A Appendices

A.1 Experimental setup

Table 5: Hyperparameters used in the experiments.
Name                    Value
CNN window size         3
CNN filters             30
BiLSTM layers           2
BiLSTM hidden units     100 dimensions
Mini-batch size         8
Optimization            Adam
Learning rate           0.001
Dropout ratio           {0.1, 0.3, 0.5}

Network setup Basically, we follow the encoder architecture proposed by Ma and Hovy (2016). First, the token-encoding layer encodes each token of the input sentence $w_t \in (w_1, w_2, \ldots, w_T)$ to a sequence of vector representations $\mathbf{w}_{1:T} = (\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_T)$. For the models using GloVe, we use the GloVe 100-dimensional embeddings13 (Pennington et al., 2014) and character-level CNN. For the models using BERT, we use the BERT-Base, Cased14 (Devlin et al., 2019), where we use the first subword embeddings within each token in the last layer of BERT. During training, we fix the word embeddings (except the CNN). Then, the encoded token representations $\mathbf{w}_{1:T}$ are fed to bidirectional LSTM (BiLSTM) (Graves et al., 2013) for computing contextual ones $\overrightarrow{h}_{1:T}$ and $\overleftarrow{h}_{1:T}$. We use 2 layers of the stacked BiLSTMs (2 forward and 2 backward LSTMs) with 100-dimensional hidden units. From $\overrightarrow{h}_{1:T}$ and $\overleftarrow{h}_{1:T}$, we create $h^{\mathrm{lstm}}_s$ for each span $s = (a, b)$ based on LSTM-minus (Wang and Chang, 2016). For flat NER, we use the representation $h^{\mathrm{lstm}}_s = [\overrightarrow{h}_b - \overrightarrow{h}_{a-1}, \overleftarrow{h}_a - \overleftarrow{h}_{b+1}]$. For nested NER, we use $h^{\mathrm{lstm}}_s = [\overrightarrow{h}_b - \overrightarrow{h}_{a-1}, \overleftarrow{h}_a - \overleftarrow{h}_{b+1}, \overrightarrow{h}_a + \overrightarrow{h}_b, \overleftarrow{h}_a + \overleftarrow{h}_b]$. We then multiply $h^{\mathrm{lstm}}_s$ with a weight matrix $W$ and obtain the span representation $h_s = W h^{\mathrm{lstm}}_s$. Finally, we use the span representation $h_s$ for computing the label distribution in each model. For efficient computation, following Sohrab and Miwa (2018), we enumerate all possible spans in a sentence with sizes less than or equal to the maximum span size $L$, i.e., each span $s = (a, b)$ satisfies the condition $b - a < L$. We set $L$ as 6.

13https://nlp.stanford.edu/projects/glove/
14https://github.com/google-research/bert

Hyperparameters Table 5 lists the hyperparameters used in the experiments. We initialize all the parameter matrices in BiLSTMs with random orthonormal matrices (Saxe et al., 2013). Other parameters are initialized following Glorot and Bengio (2010). We apply dropout (Srivastava et al., 2014) to the token-encoding layer and the input vectors of each LSTM with a dropout ratio of {0.1, 0.3, 0.5}.

Optimization To optimize the parameters, we use Adam (Kingma and Ba, 2014) with $\beta_1 = 0.9$ and $\beta_2 = 0.999$. The initial learning rate is set to $\eta_0 = 0.001$. The learning rate is updated on each epoch as $\eta_t = \eta_0 / (1 + \rho t)$, where the decay rate is $\rho = 0.05$ and $t$ is the number of epochs completed. The gradient clipping value is set to 5.0 (Pascanu et al., 2013). Parameter updates are performed in mini-batches of 8. The number of training epochs is set to 100. We save the parameters that achieve the best F1 score on each development set and evaluate them on each test set. Training the models takes less than one day on a single GPU, NVIDIA DGX-1 with Tesla V100.

A.2 Feature space visualization

[Figure 3: two t-SNE panels, (a) Classifier-based and (b) Instance-based, with points labeled LOC, MISC, ORG, PER.] Figure 3: Visualization of entity span features computed by classifier-based and instance-based models.

To better understand span representations learned by our method, we observe the feature space.
Specifically, we visualize the span representations $h_s$ on the CoNLL-2003 development set. Figure 3 shows two-dimensional projections of the entity span representations obtained with t-distributed Stochastic Neighbor Embedding (t-SNE) (Maaten and Hinton, 2008). Both models successfully learned feature spaces where instances with the same label come close to each other.
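A minimal sketch of this visualization step is given below; it is illustrative only, and assumes span_vectors holds the learned representations h_s as a NumPy array and labels holds the corresponding entity types.

```python
# Illustrative sketch: project learned span representations to 2D with t-SNE
# and plot points grouped by entity type, as in Figure 3. span_vectors is an
# (N, d) numpy array; labels is a length-N list such as ["PER", "ORG", ...].
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_span_space(span_vectors, labels, out_path="span_tsne.png"):
    points = TSNE(n_components=2, random_state=0).fit_transform(span_vectors)
    for label in sorted(set(labels)):
        idx = np.array([i for i, y in enumerate(labels) if y == label])
        plt.scatter(points[idx, 0], points[idx, 1], s=8, label=label)
    plt.legend()
    plt.savefig(out_path, dpi=200)
```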
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6460–6469 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6460 MIE: A Medical Information Extractor towards Medical Dialogues Yuanzhe Zhang1, Zhongtao Jiang1,2, Tao Zhang1,2, Shiwan Liu1,2, Jiarun Cao3⇤, Kang Liu1,2, Shengping Liu4 and Jun Zhao1,2 1 National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China 2 School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China 3 National Centre for Text Mining, University of Manchester, Manchester, M1 7DN, United Kingdom 4 Beijing Unisound Information Technology Co., Ltd, Beijing, 100028, China {yzzhang, zhongtao.jiang, tao.zhang, shiwan.liu, kliu, jzhao}@nlpr.ia.ac.cn [email protected], [email protected] Abstract Electronic Medical Records (EMRs) have become key components of modern medical care systems. Despite the merits of EMRs, many doctors suffer from writing them, which is time-consuming and tedious. We believe that automatically converting medical dialogues to EMRs can greatly reduce the burdens of doctors, and extracting information from medical dialogues is an essential step. To this end, we annotate online medical consultation dialogues in a window-sliding style, which is much easier than the sequential labeling annotation. We then propose a Medical Information Extractor (MIE) towards medical dialogues. MIE is able to extract mentioned symptoms, surgeries, tests, other information and their corresponding status. To tackle the particular challenges of the task, MIE uses a deep matching architecture, taking dialogue turn-interaction into account. The experimental results demonstrate MIE is a promising solution to extract medical information from doctor-patient dialogues. 1 1 Introduction With the advancement of the informatization process of the medical system, Electronic Medical Records (EMRs) are required by an increasing number of hospitals all around the world. Compared with conventional medical records, EMRs are easy to save and retrieve, which bring considerable convenience for both patients and doctors. Furthermore, EMRs allow medical researchers to investigate the implicit contents included, such as epidemiologic study and patient cohorts finding. ⇤Contribution during internship at Institute of Automation, Chinese Academy of Sciences. 1Data and codes are available at https://github. com/nlpir2020/MIE-ACL-2020. Despite the advantages, most doctors complain that writing EMRs makes them exhausted (Wachter and Goldsmith, 2018). According to the study of Sinsky et al. (2016), physicians spend nearly two hours doing administrative work for every hour of facetime with patients, and the most time-consuming aspect is inputting EMRs. We believe that automatically converting doctorpatient dialogues into EMRs can effectively remove the heavy burdens of doctors, making them more deliberate to communicate with their patients. One straightforward approach is the end-to-end learning, where more supervised data, i.e., dialogue-EMR pairs are needed. Unfortunately, such data is hard to acquire in medical domain due to the privacy policy. In this paper, We focus on extracting medical information from dialogues, which we think is an essential step for EMR generation. Extracting information from medical dialogues is an emerging research field, and there are only few previous attempts. Finley et al. 
(2018) proposed an approach that consists of five stages to convert a clinical conversation to EMRs, but they do not describe the detail method. Du et al. (2019) also focused on extracting information from medical dialogues, and successfully defined a new task of extracting 186 symptoms and their corresponding status. The symptoms were relatively comprehensive, but they did not concern other key information like surgeries or tests. Lin et al. (2019) collected online medical dialogues to perform symptom recognition and symptom inference, i.e., inference the status of the recognized symptoms. They also used the sequential labeling method, incorporated global attention and introduced a static symptom graph. There are two main distinctive challenges for tackling doctor-patient dialogues: a) Oral expres6461 Dialogue Window Annotated Labels Patient: Doctor, could you please tell me is it premature beat? Doctor: Yes, considering your Electrocardiogram. Do you feel palpitation or short of breath? Patient: No. Can I do radiofrequency ablation? Doctor: It is worth considering. Any discomfort in chest? Patient: I always have bouts of pain. Test: Electrocardiogram (patient-pos) Symptom: Premature beat (doctor-pos) Symptom: Cardiopalmus (patient-neg) Symptom: Dyspnea (patient-neg) Surgery: Radiofrequency ablation (doctor-pos) Symptom: Chest pain (patient-pos) Figure 1: A typical medical dialogue window and the corresponding annotated labels. “Pos” is short for “positive” and “neg” is short for “negative”. Text color and label color are aligned for clarity. All the examples in the paper are translated from Chinese. sions are much more diverse than general texts. There are many medical terms in the dialogue, but many of them are not uttered formally, which will lead to performance degradation of conventional Natural Language Processing (NLP) tools. b) Available information is scattered in various dialogue turns, thus the interaction between turns should be also considered. In order to meet these challenges, we first annotate the dialogues in a window-sliding style, as illustrated in Figure 1. Then, we propose MIE, a Medical Information Extractor constructed on a deep matching model. We believe our annotation method could put up with informal expressions, and the proposed neural matching model is able to harness the turn-interactions. We collect doctor-patient dialogues from a popular Chinese online medical consultation website, Chunyu-Doctor 2, where medical dialogues are in text format. We focus on the cardiology domain, because there are more inquiries and less tests than other departments. The annotation method considers both effectiveness and feasibility. We define four main categories, including symptoms, tests, surgeries and other information, and we further define frequent items in the categories and their corresponding status at the same time. There are two merits of our annotation method: a) the annotation is much easier than the sequential labeling manner and does not need the labelers to be medical experts; b) we can annotate the circumstances that a single label is expressed by multiple turns. We totally annotate 1,120 dialogues with 18,212 2https://www.chunyuyisheng.com segmented windows and obtain more than 40k labels. We then develop MIE constructed on a novel neural matching model. MIE model consists of four main components, namely encoder module, matching module, aggregate module and scorer module. 
We conduct extensive experiments, and MIE achieves a overall F-score of 69.28, which indicates our proposed approach is a promising solution for the task. To sum up, the contributions of this paper are as follows: • We propose a new dataset, annotating 1,120 doctor-patient dialogues from online consultation medical dialogues with more than 40k labels. The dataset will help the following researchers. • We propose MIE, a medical information extractor based on a novel deep matching model that can make use of the interaction between dialogue turns. • MIE achieves a promising overall F-score of 69.28, significantly surpassing several competitive baselines. 2 Related Work Extracting information from medical texts is a longterm objective for both biomedical and NLP community. For example, The 2010 i2b2 challenge provides a popular dataset still used in many recent researches (Uzuner et al., 2011). Three tasks were presented: a concept extraction task focused on the 6462 extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. Extracting medical information from dialogues just gets started. Finley et al. (2018) proposed a pipeline method to generate EMRs. The approach contains five steps: dialogue role labeling, Automatic Speech Recognition (ASR), knowledge extraction, structured data processing and Natural Language Generation (NLG) (Murty and Kabadi, 1987). The most important part is knowledge extraction, which uses dictionary, regular expression and other supervised machine learning methods. However, the detailed explanations are left out, which make us hard to compare with them. Du et al. (2019) aimed at generating EMRs by extracting symptoms and their status. They defined 186 symptoms and three status, i.e., experienced, not experienced and other. They proposed two models to tackle the problem. Span-Attribute Tagging Model first predicted the span of a symptom, and then used the context features to further predict the symptom name and status. The seq2seq model took k dialogue turns as input, and then directly generated the symptom name and status. They collected incredible 90k dialogues and annotated 3k of them, but the dataset is not public. The most similar work to ours is (Lin et al., 2019), which also annotated Chinese online medical dialogues. Concretely, they annotated 2,067 dialogues with the BIO (begin-in-out) schema. There are two main components, namely symptom recognition and symptom inference in their approach. The former utilized both document-level and corpus-level attention enhanced Conditional Random Field (CRF) to acquire symptoms. The letter serves determining the symptom status. Our work differs from (Du et al., 2019) and (Lin et al., 2019) mainly in the following two points: a) we only extract 45 symptom items, but the status are more detailed, furthermore, we extract surgeries, tests and other information; b) we use different extracting method. Since the annotation system is different, our approach does not need the sequential labeling, which relieves the labeling work. 3 Corpus Description 3.1 Annotation Method We collect doctor-patient dialogues from a Chinese medical consultation website, Chunyu-Doctor. The dialogues are already in text format. 
We select cardiology topic consultations, since there are more inquiries, while dialogues of other topics often depend more on tests. A typical consultation dialogue is illustrated in Figure 1. The principle of the annotation is to label useful information as comprehensive as possible. A commonly utilized annotation paradigm is sequential labeling, where the medical entities are labeled using BIO tags (Du et al., 2019; Lin et al., 2019; Collobert et al., 2011; Huang et al., 2015; Ma and Hovy, 2016). However, such annotation methods cannot label information that a) expressed by multiple turns and b) not explicitly or not consecutively expressed. Such situations are not rare in spoken dialogues, as can be seen in Figure 1. To this end, we use a window-to-information annotation method instead of sequential labeling. As listed in Table 1, we define four main categories, and for each category, we further define frequent items. The item quantity of symptom, surgery, test and other info is 45, 4, 16 and 6, respectively. In medical dialogues, status is quite Category Item Status Symptom Backache Perspiration Hiccups Nausea Cyanosis Fever Fatigue Abdominal discomfort ... patient-positive (appear) patient-negative (absent) doctor-positive (diagnosed) doctor-negative (exclude) unknown Surgery Interventional treatment Radiofrequency ablation Heart bypass surgery Stent implantation patient-positive (done) patient-negative (not done) doctor-positive(suggest) doctor-negative (deprecated) unknown Test B-mode ultrasonography CT examination CT angiography CDFI Blood pressure measurement Ultrasonography MRI Thyroid function test Treadmill test ... patient-positive(done) patient-negative (not done) doctor-positive(suggest) doctor-negative (deprecated) unknown Other info Sleep Diet Mental condition Defecation Smoking Drinking patient-positive (normal) patient-negative (abnormal) unknown Table 1: The detailed annotation labels of the dataset. 6463 crucial that cannot be ignored. For example, for a symptom, the status of appearance or absence is opposite for a particular diagnose. So it is necessary to carefully define status for each category. The status options vary with different categories, but we use unified labels for clarity. The exact meanings of the labels are also explained in Table 1. The goal of annotation is to label all the predefined information mentioned in the current dialogue. As the dialogues turn to be too long, it is difficult for giving accurate labels when finishing reading them. Thus, we divide the dialogues into pieces using a sliding window. A window consists of multiple consecutive turns of the dialogue. It is worth noting that the window-sliding annotations can be converted into dialogue-based ones like dialogue state tracking task (Mrkˇsi´c et al., 2017), the later annotation state will overwrite the old one. Here, the sliding window size is set to 5 as Du et al. (2019) did, because this size allows the included dialogue turns contain proper amount of information. For windows with less than 5 utterances, we pad them at the beginning with empty strings. The sliding step is set to 1. We invite three graduate students to label the dialogue windows. The annotators are guided by two physicians to ensure correctness. The segmented windows are randomly assigned to the annotators. In all, we annotate 1,120 dialogues, leading to 18,212 windows. We divide the data into train/develop/test sets of size 800/160/160 for dialogues and 12,931/2,587/2,694 for windows, respectively. 
In total, 46,151 labels are annotated, averaging 2.53 labels per window and 41.21 labels per dialogue. Note that about 12.83% of the windows have no gold labels, i.e., none of the pre-defined information appears in those windows. The distribution of the labels is shown in Table 2, and the status distribution in Table 3. The annotation consistency, measured by Cohen's kappa coefficient (Fleiss and Cohen, 1973), is 0.91, which indicates that our annotation scheme is feasible and easy to follow.

        Dialogue  Window  Symptom  Surgery  Test   Other info
Train   800       12931   21420    839      8879   1363
Dev     160       2587    4254     119      1680   259
Test    160       2694    4878     264      1869   327
Total   1120      18212   30552    1222     12428  1949
Table 2: The detailed annotation statistics of the dataset.

            Patient-pos  Patient-neg  Doctor-pos  Doctor-neg  Unknown
Symptom     15119        1782         1655        910         11086
Surgery     169          48           698         10          297
Test        5589         303          4443        44          2049
Other info  550          1399         –           –           1505
Table 3: The distribution of status over all labels.

3.2 Evaluation Metrics
We evaluate the extracted medical information as in an ordinary information extraction task, i.e., with Precision, Recall and F-measure. To further analyze model behavior, we set up three evaluation metrics ranging from easy to hard. Category performance is the most tolerant metric: it only considers the correctness of the category. Item performance examines the correctness of both category and item, regardless of status. Full performance is the strictest metric, requiring category, item and the corresponding status to all be correct. We report both window-level and dialogue-level results. Window-level: We evaluate the results of each segmented window and report the micro-average over all test windows. Some windows have no gold labels; if the prediction on such a window is also empty, the model has behaved correctly, so we set Precision, Recall and F-measure to 1, and otherwise to 0. Dialogue-level: We first merge the results of the windows that belong to the same dialogue; for labels that are mutually exclusive, the older labels are overwritten by the latest ones. We then evaluate the results of each dialogue and report the micro-average over all test dialogues.
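Before moving to the model, the following sketch illustrates the window-level Full metric just described, assuming each window's gold and predicted labels are represented as sets of (category, item, status) triples. Because the combination of a "micro-average" with the per-window 1/0 convention admits more than one reading, this sketch simply averages per-window scores; the helper names are ours, not the authors' evaluation script.

```python
# Sketch of the window-level Full evaluation described above, assuming each
# window's gold and predicted labels are sets of (category, item, status) triples.
# Names are illustrative; this is not the authors' evaluation script.

def window_prf(pred, gold):
    """Precision/recall/F1 for one window, with the empty-window convention."""
    if not gold:
        return (1.0, 1.0, 1.0) if not pred else (0.0, 0.0, 0.0)
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold)
    f = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f

def average_over_windows(all_pred, all_gold):
    scores = [window_prf(p, g) for p, g in zip(all_pred, all_gold)]
    n = len(scores)
    return tuple(sum(s[i] for s in scores) / n for i in range(3))

pred = [{("Symptom", "Chest pain", "patient-positive")}, set()]
gold = [{("Symptom", "Chest pain", "patient-positive"),
         ("Test", "ECG", "doctor-positive")}, set()]
print(average_over_windows(pred, gold))
```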
4 Our Approach
In this section, we elaborate on the proposed MIE model, a novel deep matching neural network. Deep matching models are widely used in many natural language processing tasks such as machine reading comprehension (Seo et al., 2017; Yu et al.), question answering (Yang et al., 2016) and dialogue generation (Zhou et al., 2018; Wu et al., 2017). Compared with classification models, matching models are able to introduce more information from the candidate side and promote interaction between both ends. The architecture of MIE is shown in Figure 2. There are four main components, namely the encoder module, the matching module, the aggregate module and the scorer module. The input of MIE is a doctor-patient dialogue window, and the output is the predicted medical information.
Figure 2: The architecture of the MIE model.
Encoder Module
The encoder is implemented as a Bi-LSTM (Hochreiter and Schmidhuber, 1997) with self-attention (Vaswani et al., 2017). Let the input utterance be $X = (x_1, x_2, ..., x_l)$; the encoder works as follows:
$$H = \mathrm{BiLSTM}(X), \quad a[j] = W H[j] + b, \quad p = \mathrm{softmax}(a), \quad c = \sum_j p[j]\, H[j] \quad (1)$$
We denote $H, c = \mathrm{Encoder}(X)$ for brevity. $H$ consists of contextual representations of every token in the input sequence $X$, and $c$ is a single vector that compresses the information of the entire sequence in a weighted way. We denote a window with $n$ utterances as $\{U[1], ..., U[n]\}$. For a candidate consisting of category, item and status, such as Symptom:Heart failure (patient-positive), we split it into the category-item pair Symptom:Heart failure, denoted by $V$, and the status patient-positive, denoted by $S$. To introduce more colloquial information, we also append item-related colloquial expressions collected during annotation to the end of $V$. Having defined the basic structure of the encoder, we now build representations for the utterances $U$ in the dialogue window, and for the candidate category-item pair $V$ and its status $S$:
$$H^{utt}_c[i], c^{utt}_c[i] = \mathrm{Encoder}^{utt}_c(U[i]), \quad H^{utt}_s[i], c^{utt}_s[i] = \mathrm{Encoder}^{utt}_s(U[i]), \quad H^{can}_c, c^{can}_c = \mathrm{Encoder}^{can}_c(V), \quad H^{can}_s, c^{can}_s = \mathrm{Encoder}^{can}_s(S) \quad (2)$$
where the superscripts $utt$ and $can$ denote the utterance and candidate encoders respectively, the subscripts $c$ and $s$ denote the category and status encoders respectively, and $i \in [1, n]$ is the index of the utterance in the dialogue window. All candidates are encoded in this step, but we only illustrate one in the figure and equations for brevity. Note that $U$, $V$ and $S$ are encoded with separate encoders, distinguishing utterance from candidate and category from status, so that each encoder can concentrate on one specific type (category-specific or status-specific) of information.
Matching Module
In this step, the category-item representation is treated as a query in an attention mechanism to compute attention values over the original utterances. We thus obtain the category-specific representation of utterance $U[i]$ as $q_c[i]$:
$$a_c[i, j] = c^{can}_c \cdot H^{utt}_c[i, j], \quad p_c[i] = \mathrm{softmax}(a_c[i]), \quad q_c[i] = \sum_j p_c[i, j]\, H^{utt}_c[i, j] \quad (3)$$
Meanwhile, the status representation is treated as another query to compute attention values over the original utterances, which yields the status-specific representation of utterance $U[i]$ as $q_s[i]$:
$$a_s[i, j] = c^{can}_s \cdot H^{utt}_s[i, j], \quad p_s[i] = \mathrm{softmax}(a_s[i]), \quad q_s[i] = \sum_j p_s[i, j]\, H^{utt}_s[i, j] \quad (4)$$
where $[i, j]$ denotes the $j$th word in the $i$th utterance. The goal of this step is to capture the most relevant information from each utterance given a candidate. For example, if the category-item pair of the candidate is Symptom:Heart failure, the model will assign high attention values to mentions of heart failure in the utterances. If the status of the candidate is patient-positive, the attention values of expressions like "I have" and "I've been diagnosed" will be high. The matching module is therefore important for detecting the presence of a category-item pair and of status-related expressions.
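As a concrete illustration of Eq. (3) (Eq. (4) is identical but uses the status encoder outputs), the following PyTorch sketch computes the candidate-aware utterance summaries. The padding mask is our addition, and the tensor names and shapes are illustrative rather than taken from the MIE implementation.

```python
# Sketch of the matching step in Eqs. (3)-(4), written with PyTorch.
# Shapes and names are illustrative: H_utt holds token representations of the
# n window utterances, c_can is the pooled candidate (category-item or status) vector.
import torch

def match(H_utt, c_can, mask):
    """H_utt: [n, l, d] token states; c_can: [d]; mask: [n, l] (1 for real tokens).
    Returns q: [n, d], one candidate-aware summary per utterance."""
    scores = torch.einsum("nld,d->nl", H_utt, c_can)        # a[i, j] = c_can . H_utt[i, j]
    scores = scores.masked_fill(mask == 0, float("-inf"))   # ignore padding tokens
    p = torch.softmax(scores, dim=-1)                       # p[i] = softmax(a[i])
    q = torch.einsum("nl,nld->nd", p, H_utt)                # q[i] = sum_j p[i, j] H_utt[i, j]
    return q

n, l, d = 5, 20, 400
H_utt, c_can = torch.randn(n, l, d), torch.randn(d)
mask = torch.ones(n, l)
print(match(H_utt, c_can, mask).shape)   # torch.Size([5, 400])
```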
Aggregate Module
The matching module introduced above captures evidence for the presence of category-item pairs and statuses. To decide whether a candidate is expressed in a dialogue window, we need to combine the category-item pair information with its status information. In particular, we need to match every category-item representation $q_c[i]$ with $q_s[i]$. Sometimes the category-item pair information and its status information appear in the same utterance, but sometimes they appear in different utterances; for example, many question-answer pairs are adjacent utterances. We therefore need to take the interactions between utterances into account. Based on this intuition, we define two strategies, which yield two different models.
MIE-single: The first strategy assumes that the category-item pair information and its status information appear in the same utterance. The representation of the candidate in the $i$th utterance is a simple concatenation of $q_c[i]$ and $q_s[i]$:
$$f[i] = \mathrm{concat}(q_c[i], q_s[i]) \quad (5)$$
where $f[i]$ contains the information of the category-item pair and its status, which can be used to predict the score of the corresponding candidate. This model only considers interactions within a single utterance, so the acquired representations are independent of each other. We call this model MIE-single.
MIE-multi: The second strategy considers the interaction between utterances. To obtain the related status information from other utterances, we treat $q_c[i]$ as a query to compute attention values over the status representations $q_s$. We then obtain the candidate representation of the utterance:
$$a[i, k] = q_c[i]^{\top} W q_s[k], \quad p[i] = \mathrm{softmax}(a[i]), \quad \tilde{q}_s[i] = \sum_k p[i, k]\, q_s[k], \quad f[i] = \mathrm{concat}(q_c[i], \tilde{q}_s[i]) \quad (6)$$
where $W$ is a learned parameter and $\tilde{q}_s$ is the new status representation, which incorporates information from the other utterances. The utterance order is an important clue in a dialogue window; for example, category-item pair information can hardly be related to status information whose utterance is too far away. To capture this kind of information, we also take utterance position into account: concretely, we add positional encodings (Vaswani et al., 2017) to each $q_c$ and $q_s$ at the beginning. We denote this model as MIE-multi. The output of the aggregate module contains the information of an entire candidate, including both the category-item and the status information.
Scorer Module
The output of the aggregate module is fed into a scorer module. We use each utterance's feature $f[i]$ to score the candidate, as it is already a candidate-specific representation. The highest score over all utterances in the window is the candidate's final score:
$$s^{utt}[i] = \mathrm{feedforward}(f[i]), \quad y = \mathrm{sigmoid}(\max_i s^{utt}[i]) \quad (7)$$
where feedforward is a 4-layer fully-connected neural network.
Learning
The loss function is the cross-entropy loss defined as follows:
$$L = -\frac{1}{KL} \sum_{k} \sum_{l} \left[ \hat{y}^{k}_{l} \log y^{k}_{l} + (1 - \hat{y}^{k}_{l}) \log(1 - y^{k}_{l}) \right] \quad (8)$$
The superscript $k$ denotes the index of the training sample, and $l$ is the index of the candidate. $K$ and $L$ are the numbers of samples and candidates, respectively, and $\hat{y}^{k}_{l}$ is the true label of the training sample.
Inference
There can be more than one answer in a dialogue window. In the inference phase, we keep all candidates whose matching score is higher than a threshold of 0.5. Since training is performed at the window level, inference is carried out in the same setting. We also obtain dialogue-level results by updating the results of consecutive windows as described above.
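Continuing the sketch above, the following PyTorch code illustrates the MIE-multi aggregation of Eq. (6) and the scorer of Eq. (7). The initialization of W, the exact shape of the 4-layer feed-forward scorer and the hidden sizes are our assumptions, not the released implementation.

```python
# Sketch of the MIE-multi aggregation (Eq. 6) and the scorer (Eq. 7), continuing
# the PyTorch sketch above. The bilinear weight W and the 4-layer feed-forward
# scorer are illustrative stand-ins, not the released implementation.
import torch
import torch.nn as nn

class MultiAggregateScorer(nn.Module):
    def __init__(self, d=400, hidden=400):
        super().__init__()
        self.W = nn.Parameter(torch.randn(d, d) * 0.01)      # bilinear weight in Eq. (6)
        layers, size = [], 2 * d
        for _ in range(3):                                    # 4 linear layers in total
            layers += [nn.Linear(size, hidden), nn.ReLU()]
            size = hidden
        layers += [nn.Linear(size, 1)]
        self.ffnn = nn.Sequential(*layers)

    def forward(self, q_c, q_s):
        """q_c, q_s: [n, d] per-utterance category and status summaries."""
        a = q_c @ self.W @ q_s.t()                            # a[i, k] = q_c[i]^T W q_s[k]
        p = torch.softmax(a, dim=-1)
        q_s_tilde = p @ q_s                                   # weighted mix of status vectors
        f = torch.cat([q_c, q_s_tilde], dim=-1)               # f[i] = concat(q_c[i], q~_s[i])
        s_utt = self.ffnn(f).squeeze(-1)                      # one score per utterance
        return torch.sigmoid(s_utt.max())                     # candidate score y

scorer = MultiAggregateScorer()
y = scorer(torch.randn(5, 400), torch.randn(5, 400))
print(float(y))   # a single matching score in (0, 1); kept at inference if > 0.5
```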
5 Experiments
In this section, we conduct experiments on the proposed dataset. It is worth noting that we do not compare MIE with (Du et al., 2019) and (Lin et al., 2019), because a) they both employ sequential labeling methods, leading to evaluation dimensions different from ours (theirs are stricter, as they must give the exact symptom positions in the original utterance), and b) their approaches are customized for the sequential labeling paradigm and thus cannot be re-implemented on our dataset.
5.1 Implementation
We use pretrained 300-dimensional Skip-Gram (Mikolov et al., 2013) embeddings to represent Chinese characters. We use the Adam (Kingma and Ba, 2015) optimizer. The size of the hidden states of both the feed-forward network and the Bi-LSTM is 400. We apply dropout (Srivastava et al., 2014) with a drop rate of 0.2 to the output of each module and to the hidden states of the feed-forward network for regularization. We adopt early stopping based on the F1 score on the development set.
5.2 Baselines
We compare MIE with several baselines. 1) Plain-Classifier. We develop a basic classifier model that uses the simplest strategy to accomplish the task. The inputs of the model are the utterances in the window. We concatenate all utterances to obtain a long sequence, encode it with a Bi-LSTM encoder, and then use self-attention to represent it as a single vector. Next, the vector is fed into a feed-forward classifier network. The output labels of the classifier consist of all possible candidates. The encoder adopts category-specific parameters. 2) MIE-Classifier. To develop a more competitive model, we reuse the MIE architecture to implement an advanced classifier model. The difference between this classifier model and MIE is the way of obtaining $q_c$ and $q_s$: instead of matching, the classifier model treats $c^{utt}_c$ and $c^{utt}_s$ directly as $q_c$ and $q_s$, respectively. Thanks to the attention mechanism in the encoder, the classifier model can also capture the category-item pair information and the status information to some extent. To further examine the effect of turn interaction, we develop two classifier variants, as we do for MIE. MIE-Classifier-single treats each utterance independently, calculates a probability score for each utterance, and uses a max-pooling operation to obtain the final score. MIE-Classifier-multi considers turn interaction in the same way as MIE-multi.
5.3 Main Results
The experimental results are shown in Table 4. From the results, we make the following observations. 1) MIE-multi achieves the best F-score on both the window-level and the dialogue-level full evaluation metric, as expected. The F-score reaches 66.40 and 69.28, respectively, which are considerable results on such sophisticated medical dialogues. 2) Both models using multi-turn interactions perform better than the models that use only single-utterance information, which further indicates that the relations between turns play an important role in dialogues and that the proposed approach can capture this interaction. As evidence, MIE-multi achieves a 2.01% F-score improvement in the dialogue-level full evaluation. 3) Matching-based methods surpass the classifier models in the full evaluation. We consider these results reasonable because matching-based methods can introduce candidate representations; this also motivates us to leverage more background knowledge in the future. Note that on the category and item metrics the MIE-Classifier models are sometimes better, but they fail to correctly predict the status information.
4) Both MIE models and MIE-classifier models overwhelm Plain-Classifier model, which indicates the MIE architecture is far more effective than the basic LSTM representation concatenating method. 5) Dialogue-level performance is not always better than window-level performance in full evalua6467 Window-level Dialogue-level Model Category Item Full Category Item Full P R F1 P R F1 P R F1 P R F1 P R F1 P R F1 Plain-Classifier 67.21 63.78 64.92 60.89 49.20 53.81 53.13 49.46 50.69 93.57 89.49 90.96 83.42 73.76 77.29 61.34 52.65 56.08 MIE-Classifier-single 80.51 76.39 77.53 76.58 64.63 68.30 68.20 61.60 62.87 97.14 91.82 93.23 91.77 75.36 80.96 71.87 56.67 61.78 MIE-Classifier-multi 80.72 77.76 78.33 76.84 68.07 70.35 67.87 64.71 64.57 96.61 92.86 93.45 90.68 82.41 84.65 68.86 62.50 63.99 MIE-single 78.62 73.55 74.92 76.67 65.51 68.88 69.40 64.47 65.18 96.93 90.16 92.01 94.27 79.81 84.72 75.37 63.17 67.27 MIE-multi 80.42 76.23 77.77 77.21 66.04 69.75 70.24 64.96 66.40 98.86 91.52 92.69 95.31 82.53 86.83 76.83 64.07 69.28 Table 4: The experimental results of MIE and other baseline models. Both window-level and dialogue-level metrics are evaluated. tion. In our experiment, the classifier-based models perform better in window-level than dialogue-level in full evaluation. The possible reason is error accumulation. When the model predicts results the current window does not support, the errors will be accumulated with the processing of the next window, which will decrease the performance. 5.4 Error Analysis To further analyze the behavior of MIE-multi, we print the confusion matrix of category-item predictions, as shown in Figure 3. We denote the matrix as A, A[i][j] means the frequency of the circumstance that the true label is i while MIE-multi gives the answer j. Figure 3: Illustration of the confusion matrix of MIEmulti. Darker color means higher value. The figure in the axis is the category-item pair index of a total number of 71. Values of orange blocks are 0. We study the matrix and find that MIEmulti failed to predict Symptom:Limited mobility, Symptom:Nausea, Symptom: Cardiomyopathy, and Test: Renal function test, which are emphasized by orange blocks (A[i][i] = 0) in Figure 3. The Patient: I have atrial fibrillation, heart failure, anemia and loss my appetite. Doctor: Hello! How long did them last? Did you examine blood routine? Patient: Yes. Doctor: Is there coronary heart disease? Patient: No. (a) Patient: I have atrial fibrillation, heart failure, anemia and loss my appetite. Doctor: Hello! How long did them last? Did you examine blood routine? Patient: Yes. Doctor: Is there coronary heart disease? Patient: No. (b) Patient: I have atrial fibrillation, heart failure, anemia and loss my appetite. Doctor: Hello! How long did them last? Did you examine blood routine? Patient: Yes. Doctor: Is there coronary heart disease? Patient: No. (c) Figure 4: Case illustration of attentions: a) attention heat map of category-item pair for each utterance; b) attention heat map of status for each utterance; c) attention heat map for the fourth utterance in the window. possible reason is that they rarely appear in the training set, with frequency of 0.63%, 2.63%, 2.38% and 1.25%, respectively. The results reveal that the data sparse and uneven problems are the bottlenecks of our approach. 5.5 Case Discussion Attention Visualization In this part, we will analyze some cases to verify the effectiveness of the model with best performance, e.g. MIE-multi. Particularly, we investigate an example shown in Figure 4. 
To determine whether the candidate Symptom:Coronary heart disease (patient-negative) is mentioned in the window, we should focus on the interaction between the adjacent pair located in the last of the window. This adjacent pair is a question-answer pair, the category-item pair information is in the question of the doctor while the status information is in the answer of the patient. In this case, MIE6468 Patient: What is the effect of sinus arrhythmia? Doctor: Sinus arrhythmia is normal in general. Don't care about it unless you feel unwell significantly. Patient: I'm feeling unwell so much (because of the sinus arrhythmia). MIE-single symptom:sinus arrhythmia (unknown) MIE-multi symptom:sinus arrhythmia (patient-positive) Figure 5: Predictions of MIE-single and MIE-multi. The gray string is the implicit reason. single does not predict right result due to its independence between utterances, while MIE-multi manages to produce the correct result. For better understanding, we utilize visualization for matching module and aggregate module. Figure 4(a) is the attention heat map when the categoryitem pair information vector ccan c matches the utterances category representations Hutt c . We can observe that the attention values of the mention of coronary heart disease are relatively high, which illustrates that the model can capture the correct category-item pair information in the window. Figure 4(b) is the attention heat map when the status information ccan s matches the utterances status representation Hutt s . The attention values of the expressions related to status such as “Yes” and “No” are high, and the expression “No” is even higher. So MIE-multi can also capture the status information in the window. We also visualize the interaction between the fourth utterance and the other utterances. In Figure 4(c), the score of the fifth utterance is the highest, which is in line with the fact that the fifth utterance is the most relevant utterance in the window. In this way the model successfully obtains the related status information for the category-item pair information in the window. In a nutshell, MIE-multi can properly capture the category-item pair and status information. The Effectiveness of Turn Interaction We demostrate a case in Figure 5 that can explicitly show the need for turn interaction, where MIE-multi shows its advancement. In this case, the label Symptom:Sinus arrhythmia (patient-positive) requires turn interaction information. Specifically, in the third utterance, the patient omits the reason that makes him sick. However, under the complete context, we can infer the reason is the sinus arrhythmia, since the patient consulted the doctor at the beginning of the window. The model need to consider the interaction between different utterances to get the conclusion. Interaction-agnostic model like MIE-single makes prediction on single utterance, and then sums them up to get the final conclusion. Consequently, it fails to handle the case when the expressions of category-item and status are separated in different utterances. As a result, MIEsingle only obtains the category-item information Symptom:Sinus arrhythmia, but the status prediction is incorrect. In contrast, MIE-multi is able to capture the interaction between different utterances and predicts the label successfully. 6 Conclusion and Future Work In this paper, we first describe a new constructed corpus for the medical information extraction task, including the annotation methods and the evaluation metrics. 
Then we propose MIE, a deep neural matching model tailored for the task. MIE is able to capture the interaction information between the dialogue turns. To show the advantage of MIE, we develop several competitive baselines for comparison. The experimental results indicate that MIE is a promising solution for medical information extraction towards medical dialogues. In the future, we should further leverage the internal relations in the candidate end, and try to introduce rich medical background knowledge into our work. Acknowledgment This work is supported by the National Natural Science Foundation of China (No.61533018, No.61922085, No.61906196) and the Key Research Program of the Chinese Academy of Sciences (Grant NO. ZDBS-SSW-JSC006). This work is also supported by Beijing Academy of Artificial Intelligence (BAAI2019QN0301), the Open Project of Beijing Key Laboratory of Mental Disroders (2019JSJB06) and the independent research project of National Laboratory of Pattern Recognition. References Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (almost) from Scratch. Journal of machine learning research, 12(Aug):2493–2537. 6469 Nan Du, Kai Chen, Anjuli Kannan, Linh Tran, Yuhui Chen, and Izhak Shafran. 2019. Extracting symptoms and their status from clinical conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 915– 925. Gregory Finley, Erik Edwards, Amanda Robinson, Michael Brenndoerfer, Najmeh Sadoughi, James Fone, Nico Axtmann, Mark Miller, and David Suendermann-Oeft. 2018. An automated medical scribe for documenting clinical encounters. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 11–15. Joseph L Fleiss and Jacob Cohen. 1973. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and psychological measurement, 33(3):613– 619. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional lstm-crf models for sequence tagging. arXiv preprint arXiv:1508.01991. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015. Xinzhu Lin, Xiahui He, Qin Chen, Huaixiao Tou, Zhongyu Wei, and Ting Chen. 2019. Enhancing dialogue symptom diagnosis with global attention and symptom graph. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 5032–5041. Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Nikola Mrkˇsi´c, Diarmuid ´O S´eaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777–1788. 
Katta G Murty and Santosh N Kabadi. 1987. Some npcomplete problems in quadratic and nonlinear programming. Mathematical programming, 39(2):117– 129. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In 5th International Conference on Learning Representations, ICLR 2017. Christine Sinsky, Lacey Colligan, Ling Li, Mirela Prgomet, Sam Reynolds, Lindsey Goeders, Johanna Westbrook, Michael Tutty, and George Blike. 2016. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Annals of internal medicine, 165(11):753–760. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958. ¨Ozlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/va challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association, 18(5):552–556. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Robert Wachter and Jeff Goldsmith. 2018. To combat physician burnout and improve care, fix the electronic health record. Harvard Business Review. Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 496–505. Liu Yang, Qingyao Ai, Jiafeng Guo, and W Bruce Croft. 2016. anmm: Ranking short answer texts with attention-based neural matching model. In Proceedings of the 25th ACM international on conference on information and knowledge management, pages 287–296. Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. Qanet: Combining local convolution with global self-attention for reading comprehension. In 6th International Conference on Learning Representations, ICLR 2018. Xiangyang Zhou, Lu Li, Daxiang Dong, Yi Liu, Ying Chen, Wayne Xin Zhao, Dianhai Yu, and Hua Wu. 2018. Multi-turn response selection for chatbots with deep attention matching network. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1118–1127.
2020
576
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470–6476 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6470 Named Entity Recognition as Dependency Parsing Juntao Yu Queen Mary University London, UK [email protected] Bernd Bohnet Google Research Netherlands [email protected] Massimo Poesio Queen Mary University London, UK [email protected] Abstract Named Entity Recognition (NER) is a fundamental task in Natural Language Processing, concerned with identifying spans of text expressing references to entities. NER research is often focused on flat entities only (flat NER), ignoring the fact that entity references can be nested, as in [Bank of [China]] (Finkel and Manning, 2009). In this paper, we use ideas from graph-based dependency parsing to provide our model a global view on the input via a biaffine model (Dozat and Manning, 2017). The biaffine model scores pairs of start and end tokens in a sentence which we use to explore all spans, so that the model is able to predict named entities accurately. We show that the model works well for both nested and flat NER through evaluation on 8 corpora and achieving SoTA performance on all of them, with accuracy gains of up to 2.2 percentage points. 1 Introduction ‘Nested Entities’ are named entities containing references to other named entities as in [Bank of [China]], in which both [China] and [Bank of China] are named entities. Such nested entities are frequent in data sets like ACE 2004, ACE 2005 and GENIA (e.g., 17% of NEs in GENIA are nested (Finkel and Manning, 2009), altough the more widely used set such as CONLL 2002, 2003 and ONTONOTES only contain so called flat named entities and nested entities are ignored. The current SoTA models all adopt a neural network architecture without hand-crafted features, which makes them more adaptable to different tasks, languages and domains (Lample et al., 2016; Chiu and Nichols, 2016; Peters et al., 2018; Devlin et al., 2019; Ju et al., 2018; Sohrab and Miwa, 2018; Strakov´a et al., 2019). In this paper, we introduce a method to handle both types of NEs in one system by adopting ideas from the biaffine dependency parsing model of Dozat and Manning (2017). For dependency parsing, the system predicts a head for each token and assigns a relation to the head-child pairs. In this work, we reformulate NER as the task of identifying start and end indices, as well as assigning a category to the span defined by these pairs. Our system uses a biaffine model on top of a multi-layer BiLSTM to assign scores to all possible spans in a sentence. After that, instead of building dependency trees, we rank the candidate spans by their scores and return the top-ranked spans that comply with constraints for flat or nested NER. We evaluated our system on three nested NER benchmarks (ACE 2004, ACE 2005, GENIA) and five flat NER corpora (CONLL 2002 (Dutch, Spanish) CONLL 2003 (English, German), and ONTONOTES). The results show that our system achieved SoTA results on all three nested NER corpora, and on all five flat NER corpora with substantial gains of up to 2.2% absolute percentage points compared to the previous SoTA. We provide the code as open source1. 2 Related Work Flat Named Entity Recognition. The majority of flat NER models are based on a sequence labelling approach. Collobert et al. (2011) introduced a neural NER model that uses CNNs to encode tokens combined with a CRF layer for the classification. 
Many other neural systems followed this approach but used instead LSTMs to encode the input and a CRF for the prediction (Lample et al., 2016; Ma and Hovy, 2016; Chiu and Nichols, 2016). These latter models were later extended to use contextdependent embeddings such as ELMo (Peters et al., 2018). Clark et al. (2018) quite successfully used cross-view training (CVT) paired with multi-task learning. This method yields impressive gains for 1The code is available at https://github.com/ juntaoy/biaffine-ner 6471 BERT, fastText & Char Embeddings BiLSTM FFNN_Start FFNN_End Biaffine Classifier Figure 1: The network architectures of our system. a number of NLP applications including NER. Devlin et al. (2019) invented BERT, a bidirectional transformer architecture for the training of language models. BERT and its siblings provided better language models that turned again into higher scores for NER. Lample et al. (2016) cast NER as transitionbased dependency parsing using a Stack-LSTM. They compare with a LSTM-CRF model which turns out to be a very strong baseline. Their transition-based system uses two transitions (shift and reduce) to mark the named entities and handles flat NER while our system has been designed to handle both nested and flat entities. Nested Named Entity Recognition. Early work on nested NER, motivated particularly by the GENIA corpus, includes (Shen et al., 2003; Beatrice Alex and Grover, 2007; Finkel and Manning, 2009). Finkel and Manning (2009) also proposed a constituency parsing-based approach. In the last years, we saw an increasing number of neural models targeting nested NER as well. Ju et al. (2018) suggested a LSTM-CRF model to predict nested named entities. Their algorithm iteratively continues until no further entities are predicted. Lin et al. (2019) tackle the problem in two steps: they first detect the entity head, and then they infer the entity boundaries as well as the category of the named entity. Strakov´a et al. (2019) tag the nested named entity by a sequence-to-sequence model exploring combinations of context-based embeddings such as ELMo, BERT, and Flair. Zheng et al. (2019) use a boundary aware network to solve the nested NER. Similar to our work, Sohrab and Miwa (2018) enumerate exhaustively all possible spans up to a defined length by concatenating the LSTMs outputs for the start and end position and then using this to calculate a score for each span. Apart from the different network and word embedding configurations, the main difference between their model and ours is there for the use of biaffine model. Due to the biaffine model, we get a global view of the sentence while Sohrab and Miwa (2018) concatenates the output of the LSTMs of possible start and end positions up to a distinct length. Dozat and Manning (2017) demonstrated that the biaffine mapping performs significantly better than just the concatenation of pairs of LSTM outputs. 3 Methods Our model is inspired by the dependency parsing model of Dozat and Manning (2017). We use both word embeddings and character embeddings as input, and feed the output into a BiLSTM and finally to a biaffine classifier. Figure 1 shows an overview of the architecture. To encode words, we use both BERTLarge and fastText embeddings (Bojanowski et al., 2016). For BERT we follow the recipe of (Kantor and Globerson, 2019) to obtain the context dependent embeddings for a target token with 64 surrounding tokens each side. For the character-based word embeddings, we use a CNN to encode the characters of the tokens. 
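A minimal sketch of such a character-level CNN encoder is shown below, using the hyperparameters later listed in Table 1 (character embedding size 8, 50 filters, widths 3, 4 and 5). The pooling choice (max over time) and all names are our assumptions rather than details taken from the paper's code.

```python
# A minimal character-CNN word encoder of the kind described above, using the
# hyperparameters listed later in Table 1 (char embedding size 8, 50 filters,
# widths 3/4/5). Layer names and pooling details are our assumptions.
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, n_chars=128, emb_size=8, n_filters=50, widths=(3, 4, 5)):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb_size, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_size, n_filters, w, padding=w - 1) for w in widths)

    def forward(self, char_ids):
        """char_ids: [num_tokens, max_chars] -> [num_tokens, n_filters * len(widths)]"""
        x = self.emb(char_ids).transpose(1, 2)                # [tokens, emb, chars]
        pooled = [conv(x).max(dim=-1).values for conv in self.convs]
        return torch.cat(pooled, dim=-1)                      # one vector per token

encoder = CharCNN()
print(encoder(torch.randint(1, 128, (7, 12))).shape)          # torch.Size([7, 150])
```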
The concatenation of the word and character-based word embeddings is fed into a BiLSTM to obtain the word representations (x). After obtaining the word representations from the BiLSTM, we apply two separate FFNNs to create different representations ($h_s$/$h_e$) for the start/end of the spans. Using different representations for the start/end of the spans allows the system to learn to identify the start/end of the spans separately. This improves accuracy compared to a model that directly uses the outputs of the LSTM, since the contexts of the start and the end of an entity are different. Finally, we employ a biaffine model over the sentence to create an $l \times l \times c$ scoring tensor ($r_m$), where $l$ is the length of the sentence and $c$ is the number of NER categories + 1 (for non-entity). We compute the score for a span $i$ by:
$$h_s(i) = \mathrm{FFNN}_s(x_{s_i}), \quad h_e(i) = \mathrm{FFNN}_e(x_{e_i}), \quad r_m(i) = h_s(i)^{\top} U_m h_e(i) + W_m (h_s(i) \oplus h_e(i)) + b_m$$
where $s_i$ and $e_i$ are the start and end indices of the span $i$, $U_m$ is a $d \times c \times d$ tensor, $W_m$ is a $2d \times c$ matrix and $b_m$ is the bias. The tensor $r_m$ provides scores for all possible spans that could constitute a named entity under the constraint that $s_i \le e_i$ (the start of an entity is before its end). We assign each span a NER category $y'$:
$$y'(i) = \arg\max r_m(i)$$
We then rank all the spans that have a category other than "non-entity" by their category scores ($r_m(i_{y'})$) in descending order and apply the following post-processing constraints. For nested NER, an entity is selected as long as it does not clash with the boundaries of higher-ranked entities. We say that an entity $i$ clashes boundaries with another entity $j$ if $s_i < s_j \le e_i < e_j$ or $s_j < s_i \le e_j < e_i$; e.g., in "the Bank of China", the entity "the Bank of" clashes boundaries with the entity "Bank of China", hence only the span with the higher category score is selected. For flat NER, we apply one more constraint: any entity containing, or contained in, an entity ranked before it will not be selected. The learning objective of our named entity recognizer is to assign a correct category (including non-entity) to each valid span. Hence it is a multi-class classification problem, and we optimise our models with softmax cross-entropy:
$$p_m(i_c) = \frac{\exp(r_m(i_c))}{\sum_{\hat{c}=1}^{C} \exp(r_m(i_{\hat{c}}))}, \quad \mathrm{loss} = -\sum_{i=1}^{N} \sum_{c=1}^{C} y_{ic} \log p_m(i_c)$$
4 Experiments
Data Set. We evaluate our system on both nested and flat NER. For the nested NER task, we use the ACE 2004 (https://catalog.ldc.upenn.edu/LDC2005T09), ACE 2005 (https://catalog.ldc.upenn.edu/LDC2006T06), and GENIA (Kim et al., 2003) corpora; for flat NER, we test our system on the CONLL 2002 (Tjong Kim Sang, 2002), CONLL 2003 (Tjong Kim Sang and De Meulder, 2003) and ONTONOTES (https://catalog.ldc.upenn.edu/LDC2013T19) corpora. For ACE 2004 and ACE 2005 we follow the same settings as Lu and Roth (2015) and Muis and Lu (2017) and split the data into 80%/10%/10% for the train, development and test sets, respectively. To make a fair comparison, we also used the same documents as Lu and Roth (2015) for each split.
Parameter                Value
BiLSTM size              200
BiLSTM layers            3
BiLSTM dropout           0.4
FFNN size                150
FFNN dropout             0.2
BERT size                1024
BERT layers              last 4
fastText embedding size  300
Char CNN size            50
Char CNN filter widths   [3,4,5]
Char embedding size      8
Embeddings dropout       0.5
Optimiser                Adam
Learning rate            1e-3
Table 1: Major hyperparameters for our models.
For GENIA, we use the GENIA v3.0.2 corpus. We preprocess the dataset following the same settings as Finkel and Manning (2009) and Lu and Roth (2015) and use a 90%/10% train/test split.
For this evaluation, since we do not have a development set, we train our system on 50 epochs and evaluate on the final model. For CONLL 2002 and CONLL 2003, we evaluate on all four languages (English, German, Dutch and Spanish). We follow Lample et al. (2016) to train our system on the concatenation of the train and development set. For ONTONOTES, we evaluate on the English corpus and follow Strubell et al. (2017) to use the same train, development and test split as used in CoNLL 2012 shared task for coreference resolution (Pradhan et al., 2012). Evaluation Metric. We report recall, precision and F1 scores for all evaluations. The named entity is considered correct when both boundary and category are predicted correctly. Hyperparameters We use a unified setting for all of the experiments, Table 1 shows hyperparameters for our system. 5In Sohrab and Miwa (2018), the last 10% of the training set is used as a development set, we include their result mainly because their system is similar to ours. 6The revised version is provided by the shared task organiser in 2006 with more consistent annotations. We confirmed with the author of Akbik et al. (2018) that they used the revised version. 6473 Model P R F1 ACE 2004 Katiyar and Cardie (2018) 73.6 71.8 72.7 Wang et al. (2018) 73.3 Wang and Lu (2018) 78.0 72.4 75.1 Strakov´a et al. (2019) 84.4 Luan et al. (2019) 84.7 Our model 87.3 86.0 86.7 ACE 2005 Katiyar and Cardie (2018) 70.6 70.4 70.5 Wang et al. (2018) 73.0 Wang and Lu (2018) 76.8 72.3 74.5 Lin et al. (2019) 76.2 73.6 74.9 Fisher and Vlachos (2019) 82.7 82.1 82.4 Luan et al. (2019) 82.9 Strakov´a et al. (2019) 84.3 Our model 85.2 85.6 85.4 GENIA Katiyar and Cardie (2018) 79.8 68.2 73.6 Wang et al. (2018) 73.9 Ju et al. (2018) 78.5 71.3 74.7 Wang and Lu (2018) 77.0 73.3 75.1 Sohrab and Miwa (2018)5 93.2 64.0 77.1 Lin et al. (2019) 75.8 73.9 74.8 Luan et al. (2019) 76.2 Strakov´a et al. (2019) 78.3 Our model 81.8 79.3 80.5 Table 2: State of the art comparison on ACE 2004, ACE 2005 and GENIA corpora for nested NER. 5 Results on Nested NER Using the constraints for nested NER, we first evaluate our system on nested named entity corpora: ACE 2004, ACE 2005 and GENIA. Table 2 shows the results. Both ACE 2004 and ACE 2005 contain 7 NER categories and have a relatively high ratio of nested entities (about 1/3 of then named entities are nested). Our results outperform the previous SoTA system by 2% (ACE 2004) and 1.1% (ACE 2005), respectively. GENIA differs from ACE 2004 and ACE 2005 and uses five medical categories such as DNA or RNA. For the GENIA corpus our system achieved an F1 score of 80.5% and improved the SoTA by 2.2% absolute. Our hypothesise is that for GENIA the high accuracy gain is due to our structural prediction approach and that sequence-tosequence models rely more on the language model Model P R F1 ONTONOTES Chiu and Nichols (2016) 86.0 86.5 86.3 Strubell et al. (2017) 86.8 Clark et al. (2018) 88.8 Fisher and Vlachos (2019) 89.2 Our model 91.1 91.5 91.3 CONLL 2003 English Chiu and Nichols (2016) 91.4 91.9 91.6 Lample et al. (2016) 90.9 Strubell et al. (2017) 90.7 Devlin et al. (2019) 92.8 Strakov´a et al. (2019) 93.4 Our model 93.7 93.3 93.5 CONLL 2003 German Lample et al. (2016) 78.8 Strakov´a et al. (2019) 85.1 Our model 88.3 84.6 86.4 CONLL 2003 German revised6 Akbik et al. (2018) 88.3 Our model 92.4 88.2 90.3 CONLL 2002 Spanish Lample et al. (2016) 85.8 Strakov´a et al. (2019) 88.8 Our model 90.6 90.0 90.3 CONLL 2002 Dutch Lample et al. (2016) 81.7 Akbik et al. 
(2019) 90.4 Strakov´a et al. (2019) 92.7 Our model 94.5 92.8 93.7 Table 3: State of the art comparison on CONLL 2002, CONLL 2003, ONTONOTES corpora for flat NER. embeddings which are less informative for categories such as DNA, RNA. Our system achieved SoTA results on all three corpora for nested NER and demonstrates well the advantages of a structural prediction over sequence labelling approach. 6 Results on Flat NER We evaluate our system on five corpora for flat NER (CONLL 2002 (Dutch, Spanish), CONLL 2003 (English, German) and ONTONOTES. Unlike most of the systems that treat flat NER as a sequence labelling task, our system predicts named entities by considering all possible spans and ranking them. The ONTONOTES corpus consists of documents form 7 different domains and is annotated with 18 6474 F1 ∆ Our model 89.9 - biaffine 89.1 0.8 - BERT emb 87.5 2.4 - fastText emb 89.5 0.4 - Char emb 89.8 0.1 Table 4: The comparison between our full model and ablated models on ONTONOTES development set. fine-grained named entity categories. To predict named entities for this corpus is more difficult than for CONLL 2002 and CONLL 2003. These corpora use coarse-grained named entity categories (only 4 categories). The sequence-to-sequence models usually perform better on the CONLL 2003 English corpus (see Table 3), e.g. the system of Chiu and Nichols (2016); Strubell et al. (2017). In contrast, our system is less sensitive to the domain and the granularity of the categories. As shown in Table 3, our system achieved an F1 score of 91.3% on the ONTONOTES corpus and is very close to our system performance on the CONLL 2003 corpus (93.5%). On the multi-lingual data, our system achieved F1 scores of 86.4% for German, 90.3% for Spanish and 93.5% for Dutch. Our system outperforms the previous SoTA results by large margin of 2.1%, 1.5%, 1.3% and 1% on ONTONOTES, Spanish, German and Dutch corpora respectively and is slightly better than the SoTA on English data set. In addition, we also tested our system on the revised version of German data to compare with the model by Akbik et al. (2018), our system again achieved a substantial gain of 2% when compared with their system. 7 Ablation Study To evaluate the contribution of individual components of our system, we further remove selected components and use ONTONOTES for evaluation (see Table 4). We choose ONTONOTES for our ablation study as it is the largest corpus. Biaffine Classifier We replace the biaffine mapping with a CRF layer and convert our system into a sequence labelling model. The CRF layer is frequently used in models for flat NER, e.g. (Lample et al., 2016). When we replace the biaffine model of our system with a CRF layer, the performance drops by 0.8 percentage points (Table 4). The large performance difference shows the benefit of adding a biaffine model and confirms our hypothesis that the dependency parsing framework is an important factor for the high accuracy of our system. Contextual Embeddings We ablate BERT embeddings and as expected, after removing BERT embeddings, the system performance drops by a large number of 2.4 percentage points (see Table 4). This shows that BERT embeddings are one of the most important factors for the accuracy. Context Independent Embeddings We remove the context-independent fastText embedding from our system. The context-independent embedding contributes 0.4% towards the score of our full system (Table 4). 
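To make the ablated component concrete, the sketch below reimplements the biaffine span scorer of Section 3 together with the greedy decoding constraints. Dimensions follow Table 1 (FFNN size 150), but the parameterization and decoding details are ours, and the released code at https://github.com/juntaoy/biaffine-ner may differ.

```python
# Compact sketch of the biaffine span scorer whose removal is ablated above,
# plus the greedy decoding constraints for nested and flat NER. Dimensions and
# names are illustrative; the released implementation may differ in detail.
import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    def __init__(self, d=150, n_cats=5):                     # n_cats includes "non-entity"
        super().__init__()
        self.U = nn.Parameter(torch.randn(d, n_cats, d) * 0.01)
        self.W = nn.Linear(2 * d, n_cats)                     # covers W_m and the bias b_m

    def forward(self, h_s, h_e):
        """h_s, h_e: [l, d] start/end representations -> scores r_m: [l, l, n_cats]."""
        bilinear = torch.einsum("id,dcf,jf->ijc", h_s, self.U, h_e)
        pair = torch.cat([h_s.unsqueeze(1).expand(-1, h_e.size(0), -1),
                          h_e.unsqueeze(0).expand(h_s.size(0), -1, -1)], dim=-1)
        return bilinear + self.W(pair)

def decode(spans, flat=False):
    """spans: list of (score, start, end, label), non-entity spans already removed."""
    chosen = []
    for score, s, e, label in sorted(spans, reverse=True):
        clash = any(a < s <= b < e or s < a <= e < b for _, a, b, _ in chosen)
        nested = any(a <= s and e <= b or s <= a and b <= e for _, a, b, _ in chosen)
        if not clash and not (flat and nested):
            chosen.append((score, s, e, label))
    return chosen

scorer = BiaffineScorer()
r = scorer(torch.randn(10, 150), torch.randn(10, 150))
print(r.shape)                                                # torch.Size([10, 10, 5])
```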
Which suggests that even with the BERT embeddings enabled, the contextindependent embeddings can still make quite noticeable improvement to a system. Character Embeddings Finally, we remove the character embeddings. As we can see from Table 4, the impact of character embeddings is quite small. One explanation would be that English is not a morphologically rich language hence does not benefit largely from character-level information and the BERT embeddings itself are based on word pieces that already capture some character-level information. Overall, the biaffine mapping and the BERT embedding together contributed most to the high accuracy of our system. 8 Conclusion In this paper, we reformulate NER as a structured prediction task and adopted a SoTA dependency parsing approach for nested and flat NER. Our system uses contextual embeddings as input to a multilayer BiLSTM. We employ a biaffine model to assign scores for all spans in a sentence. Further constraints are used to predict nested or flat named entities. We evaluated our system on eight named entity corpora. The results show that our system achieves SoTA on all of the eight corpora. We demonstrate that advanced structured prediction techniques lead to substantial improvements for both nested and flat NER. Acknowledgments This research was supported in part by the DALI project, ERC Grant 695662. 6475 References Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 724–728, Minneapolis, Minnesota. Association for Computational Linguistics. Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638–1649, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Barry Haddow Beatrice Alex and Claire Grover. 2007. Recognising nested named entities in biomedical text. In Proc. of BioNLP, pages 65–72. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606. Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics, 4:357–370. Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1914– 1925, Brussels, Belgium. Association for Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of machine learning research, 12(Aug):2493–2537. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Timothy Dozat and Christopher Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of 5th International Conference on Learning Representations (ICLR). Jenny Rose Finkel and Christopher D. Manning. 2009. 
Nested named entity recognition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 141–150, Singapore. Association for Computational Linguistics. Joseph Fisher and Andreas Vlachos. 2019. Merge and label: A novel neural network architecture for nested NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5840–5850, Florence, Italy. Association for Computational Linguistics. Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446–1459, New Orleans, Louisiana. Association for Computational Linguistics. Ben Kantor and Amir Globerson. 2019. Coreference resolution with entity equalization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 673–677, Florence, Italy. Association for Computational Linguistics. Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861–871, New Orleans, Louisiana. Association for Computational Linguistics. J.-D. Kim, T. Ohta, Y. Tateisi, and J. Tsujii. 2003. GENIA corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl1) : i180− −i182. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260– 270. Association for Computational Linguistics. Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019. Sequence-to-nuggets: Nested entity mention detection via anchor-region networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5182–5192, Florence, Italy. Association for Computational Linguistics. Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857–867, Lisbon, Portugal. Association for Computational Linguistics. Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3036–3046, Minneapolis, Minnesota. Association for Computational Linguistics. 6476 Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064–1074, Berlin, Germany. Association for Computational Linguistics. Aldrian Obaja Muis and Wei Lu. 2017. Labeling gaps between words: Recognizing overlapping mentions with mention separators. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2608–2618, Copenhagen, Denmark. Association for Computational Linguistics. Matthew E. 
Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke S. Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Proceedings of the Sixteenth Conference on Computational Natural Language Learning (CoNLL 2012), Jeju, Korea. Dan Shen, Jie Zhang, Guodong Zhou, Jian Su, and ChewLim Tan. 2003. Effective adaptation of a Hidden Markov Model-based Named Entity Recognizer for the biomedical domain. In Proceedings of the ACL 2003 Workshop on Natural Language Processing in Biomedicine. Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2843–2849, Brussels, Belgium. Association for Computational Linguistics. Jana Strakov´a, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested NER through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326–5331, Florence, Italy. Association for Computational Linguistics. Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2670–2680, Copenhagen, Denmark. Association for Computational Linguistics. Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002). Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Languageindependent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 204–214, Brussels, Belgium. Association for Computational Linguistics. Bailin Wang, Wei Lu, Yu Wang, and Hongxia Jin. 2018. A neural transition-based model for nested mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1011–1017, Brussels, Belgium. Association for Computational Linguistics. Changmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Leung, and Guandong Xu. 2019. A boundary-aware neural model for nested named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 357–366, Hong Kong, China. Association for Computational Linguistics.
2020
577
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6477–6487 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6477 Neighborhood Matching Network for Entity Alignment Yuting Wu1, Xiao Liu1, Yansong Feng1,2∗, Zheng Wang3 and Dongyan Zhao1,2 1Wangxuan Institute of Computer Technology, Peking University, China 2The MOE Key Laboratory of Computational Linguistics, Peking University, China 3School of Computing, University of Leeds, U.K. {wyting,lxlisa,fengyansong,zhaodongyan}@pku.edu.cn [email protected] ∗Corresponding author.
Abstract
Structural heterogeneity between knowledge graphs is an outstanding challenge for entity alignment. This paper presents Neighborhood Matching Network (NMN), a novel entity alignment framework for tackling the structural heterogeneity challenge. NMN estimates the similarities between entities to capture both the topological structure and the neighborhood difference. It provides two innovative components for better learning representations for entity alignment. It first uses a novel graph sampling method to distill a discriminative neighborhood for each entity. It then adopts a cross-graph neighborhood matching module to jointly encode the neighborhood difference for a given entity pair. Such strategies allow NMN to effectively construct matching-oriented entity representations while ignoring noisy neighbors that have a negative impact on the alignment task. Extensive experiments performed on three entity alignment datasets show that NMN can well estimate the neighborhood similarity in more difficult cases and significantly outperforms 12 previous state-of-the-art methods.
1 Introduction
By aligning entities from different knowledge graphs (KGs) to the same real-world identity, entity alignment is a powerful technique for knowledge integration. Unfortunately, entity alignment is non-trivial because real-life KGs are often incomplete and different KGs typically have heterogeneous schemas. Consequently, equivalent entities from two KGs can have distinct surface forms or dissimilar neighborhood structures.
In recent years, embedding-based methods have become the dominant approach for entity alignment (Zhu et al., 2017; Pei et al., 2019a; Cao et al., 2019; Xu et al., 2019; Li et al., 2019a; Sun et al., 2020).
Figure 1: Illustrative examples: two tough cases for entity alignment. Case 1 (a): equivalent entities (布鲁克林区 / Brooklyn, with 3 vs. 21 neighbors and 2 common neighbors) with different sizes of neighborhoods. Case 2 (b): equivalent entities (利物浦 / Liverpool, with 20 vs. 22 neighbors and 3 common neighbors) whose common neighbors are not discriminative for alignment. Dashed rectangles denote the common neighbors between different KGs.
Such approaches have the advantage of not relying on manually constructed features or rules (Mahdisoltani et al., 2015).
Using a set of seed alignments, an embedding-based method models the KG structures to automatically learn how to map the equivalent entities among different KGs into a unified vector space where entity alignment can be performed by measuring the distance between the embeddings of two entities. The vast majority of prior works in this direction build upon an important assumption - entities and their counterparts from other KGs have similar neighborhood structures, and therefore, similar embeddings will be generated for equivalent entities. Unfortunately, the assumption does not always hold for real-life scenarios due to the incompleteness and heterogeneities of KGs. As an example, consider Figure 1 (a), which shows two equivalent entities from the Chinese and English versions of Wikipedia. Here, both central entities refer to the same real-world identity, Brooklyn, a borough of New York City. However, the two entities have different sizes of neighborhoods and 6478 distinct topological structures. The problem of dissimilar neighborhoods between equivalent entities is ubiquitous. Sun et al. (2020) reports that the majority of equivalent entity pairs have different neighbors in the benchmark datasets DBP15K, and the proportions of such entity pairs are over 86% (up to 90%) in different language versions of DBP15K. Particularly, we find that the alignment accuracy of existing embedding-based methods decreases significantly as the gap of equivalent entities’ neighborhood sizes increases. For instance, RDGCN (Wu et al., 2019a), a state-of-the-art, delivers an accuracy of 59% on the Hits@1 score on entity pairs whose number of neighbors differs by no more than 10 on DBP15KZH−EN. However, its performance drops to 42% when the difference for the number of neighbors increases to 20 and to 35% when the difference increases to be above 30. The disparity of the neighborhood size and topological structures pose a significant challenge for entity alignment methods. Even if we were able to set aside the difference in the neighborhood size, we still have another issue. Since most of the common neighbors would be popular entities, they will be neighbors of many other entities. As a result, it is still challenging to align such entities. To elaborate on this point, let us now consider Figure 1 (b). Here, the two central entities (both indicate the city Liverpool) have similar sizes of neighborhoods and three common neighbors. However, the three common neighbors (indicate United Kingdom, England and Labour Party (UK), respectively) are not discriminative enough. This is because there are many city entities for England which also have the three entities in their neighborhoods – e.g., the entity Birmingham. For such entity pairs, in addition to common neighbors, other informative neighbors – like those closely contextually related to the central entities – must be considered. Because existing embedding-based methods are unable to choose the right neighbors, we need a better approach. We present Neighborhood Matching Network (NMN), a novel sampling-based entity alignment framework. NMN aims to capture the most informative neighbors and accurately estimate the similarities of neighborhoods between entities in different KGs. NMN achieves these by leveraging the recent development in Graph Neural Networks (GNNs). 
It first utilizes the Graph Convolutional Networks (GCNs) (Kipf and Welling, 2017) to model the topological connection information, and then selectively samples each entity’s neighborhood, aiming at retaining the most informative neighbors towards entity alignment. One of the key challenges here is how to accurately estimate the similarity of any two entities’ sampled neighborhood. NMN addresses this challenge by designing a discriminative neighbor matching module to jointly compute the neighbor differences between the sampled subgraph pairs through a cross-graph attention mechanism. Note that we mainly focus on the neighbor relevance in the neighborhood sampling and matching modules, while the neighbor connections are modeled by GCNs. We show that, by integrating the neighbor connection information and the neighbor relevance information, NMN can effectively align entities from real-world KGs with neighborhood heterogeneity. We evaluate NMN by applying it to benchmark datasets DBP15K (Sun et al., 2017) and DWY100K (Sun et al., 2018), and a sparse variant of DBP15K. Experimental results show that NMN achieves the best and more robust performance over state-ofthe-arts. This paper makes the following technical contributions. It is the first to: • employ a new graph sampling strategy for identifying the most informative neighbors towards entity alignment (Sec. 3.3). • exploit a cross-graph attention-based matching mechanism to jointly compare discriminative subgraphs of two entities for robust entity alignment (Sec. 3.4). 2 Related Work Embedding-based entity alignment. In recent years, embedding-based methods have emerged as viable means for entity alignment. Early works in the area utilize TransE (Bordes et al., 2013) to embed KG structures, including MTransE (Chen et al., 2017), JAPE (Sun et al., 2017), IPTransE (Zhu et al., 2017), BootEA (Sun et al., 2018), NAEA (Zhu et al., 2019) and OTEA (Pei et al., 2019b). Some more recent studies use GNNs to model the structures of KGs, including GCN-Align (Wang et al., 2018), GMNN (Xu et al., 2019), RDGCN (Wu et al., 2019a), AVR-GCN (Ye et al., 2019), and HGCN-JE (Wu et al., 2019b). Besides the structural information, some recent methods like KDCoE (Chen et al., 2018), AttrE (Trisedya et al., 2019), MultiKE (Zhang et al., 2019) and HMAN 6479 (Yang et al., 2019) also utilize additional information like Wikipedia entity descriptions and attributes to improve entity representations. However, all the aforementioned methods ignore the neighborhood heterogeneity of KGs. MuGNN (Cao et al., 2019) and AliNet (Sun et al., 2020) are two most recent efforts for addressing this issue. While promising, both models still have drawbacks. MuGNN requires both pre-aligned entities and relations as training data, which can have expensive overhead for training data labeling. AliNet considers all one-hop neighbors of an entity to be equally important when aggregating information. However, not all one-hop neighbors contribute positively to characterizing the target entity. Thus, considering all of them without careful selection can introduce noise and degrade the performance. NMN avoids these pitfalls. With only a small set of pre-aligned entities as training data, NMN chooses the most informative neighbors for entity alignment. Graph neural networks. GNNs have recently been employed for various NLP tasks like semantic role labeling (Marcheggiani and Titov, 2017) and machine translation (Bastings et al., 2017). 
GNNs learn node representations by recursively aggregating the representations of neighboring nodes. There are a range of GNN variants, including the Graph Convolutional Network (GCN) (Kipf and Welling, 2017), the Relational Graph Convolutional Network (Schlichtkrull et al., 2018), the Graph Attention Network (Veliˇckovi´c et al., 2018). Giving the powerful capability for modeling graph structures, we also leverage GNNs to encode the structural information of KGs (Sec. 3.2). Graph matching. The similarity of two graphs can be measured by exact matching (graph isomorphism) (Yan et al., 2004) or through structural information like the graph editing distance (Raymond et al., 2002). Most recently, the Graph Matching Network (GMN) (Li et al., 2019b) computes a similarity score between two graphs by jointly reasoning on the graph pair through cross-graph attention-based matching. Inspired by GMN, we design a cross-graph neighborhood matching module (Sec. 3.4) to capture the neighbor differences between two entities’ neighborhoods. Graph sampling. This technique samples a subset of vertices or edges from the original graph. Some of the popular sampling approaches include vertex-, edge- and traversal-based sampling (Hu and Lau, 2013). In our entity alignment framework, we propose a vertex sampling method to select informative neighbors and to construct a neighborhood subgraph for each entity. 3 Our Approach Formally, we represent a KG as G = (E, R, T), where E, R, T denote the sets of entities, relations and triples respectively. Without loss of generality, we consider the task of entity alignment between two KGs, G1 and G2, based on a set of pre-aligned equivalent entities. The goal is to find pairs of equivalent entities between G1 and G2. 3.1 Overview of NMN As highlighted in Sec. 1, the neighborhood heterogeneity and noisy common neighbors of real-world KGs make it difficult to capture useful information for entity alignment. To tackle these challenges, NMN first leverages GCNs to model the neighborhood topology information. Next, it employs neighborhood sampling to select the more informative neighbors. Then, it utilizes a cross-graph matching module to capture neighbor differences. As depicted in Figure 2, NMN takes as input two KGs, G1 and G2, and produces embeddings for each candidate pair of entities, e1 and e2, so that entity alignment can be performed by measuring the distance, d(e1, e2), of the learned embeddings. It follows a four-stage processing pipeline: (1) KG structure embedding, (2) neighborhood sampling, (3) neighborhood matching, and (4) neighborhood aggregation for generating embeddings. 3.2 KG Structure Embedding To learn the KG structure embeddings, NMN utilizes multi-layered GCNs to aggregate higher degree neighboring structural information for entities. NMNs uses pre-trained word embeddings to initialize the GCN. This strategy is shown to be effective in encoding the semantic information of entity names in prior work (Xu et al., 2019; Wu et al., 2019a). Formally, let G1 = (E1, R1, T1) and G2 = (E2, R2, T2) be two KGs to be aligned, we put G1 and G2 together as one big input graph to NMN. 
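As an illustration of this input construction, the following is a minimal sketch (not the authors' released implementation): it merges the two KGs into one normalized adjacency matrix and initializes node features from pre-trained word vectors of entity names. The helper name, the relation-agnostic adjacency, and the token-averaging of name embeddings are our own assumptions for the sketch.

```python
import numpy as np
import scipy.sparse as sp

def build_merged_graph(triples_1, triples_2, ent_names, word_vecs, dim=300):
    """Merge two KGs into one undirected graph for the GCN encoder.

    triples_1 / triples_2 : lists of (head_id, relation_id, tail_id); entity ids
        are assumed disjoint across the two KGs and contiguous in [0, N).
    ent_names : dict entity_id -> entity name string (non-English names assumed
        pre-translated, as in the paper's DBP15K setup).
    word_vecs : dict token -> np.ndarray, e.g. loaded from glove.840B.300d.
    """
    triples = list(triples_1) + list(triples_2)
    num_ent = len(ent_names)

    # Adjacency over the union of the two KGs; relation types are dropped here
    # because relation cues are handled by the later matching modules.
    rows, cols = [], []
    for h, _, t in triples:
        rows += [h, t]                 # symmetric edges
        cols += [t, h]
    data = np.ones(len(rows), dtype=np.float32)
    adj = sp.coo_matrix((data, (rows, cols)), shape=(num_ent, num_ent))
    adj = adj + sp.eye(num_ent)                        # self-loops (j in N_i ∪ {i})
    deg_inv = sp.diags(1.0 / np.asarray(adj.sum(1)).ravel())
    adj_norm = deg_inv @ adj                           # 1/eps_i normalization

    # Initial node features: average of the pre-trained word vectors of the
    # entity-name tokens (one simple choice; the paper only states that
    # pre-trained name embeddings initialize the GCN input).
    feats = np.zeros((num_ent, dim), dtype=np.float32)
    for eid, name in ent_names.items():
        vecs = [word_vecs[tok] for tok in name.lower().split() if tok in word_vecs]
        if vecs:
            feats[eid] = np.mean(vecs, axis=0)
    return adj_norm, feats
```

The resulting normalized adjacency and feature matrix are what the GCN layers described next operate on.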
Each GCN layer takes a set of node features as input and updates the node representations as: h(l) i = ReLU( X j∈Ni∪{i} 1 ϵi W(l)h(l−1) j ) (1) 6480 G1 G2 GCNs d(e1, e2) KG Structure Embedding Neighborhood Sampling e1 e1 e1 e2 e2 e2 e1 e2 Neighborhood Matching Neighborhood Aggregation Neighborhood Aggregation Figure 2: Overall architecture and processing pipeline of Neighborhood Matching Network (NMN). where {h(l) 1 , h(l) 2 , ..., h(l) n |h(l) i ∈Rd(l)} is the output node (entity) features of l-th GCN layer, ϵi is the normalization constant, Ni is the set of neighbor indices of entity i, and W(l) ∈Rd(l)×d(l−1) is a layer-specific trainable weight matrix. To control the accumulated noise, we also introduce highway networks (Srivastava et al., 2015) to GCN layers, which can effectively control the noise propagation across GCN layers (Rahimi et al., 2018; Wu et al., 2019b). 3.3 Neighborhood Sampling The one-hop neighbors of an entity are key to determine whether the entity should be aligned with other entities. However, as we have discussed in Sec. 1, not all one-hop neighbors contribute positively for entity alignment. To choose the right neighbors, we apply a down-sampling process to select the most informative entities towards the central target entity from its one-hop neighbors. Recall that we use pre-trained word embeddings of entity names to initialize the input node features of GCNs. As a result, the entity embeddings learned by GCNs contain rich contextual information for both the neighboring structures and the entity semantics. NMN exploits such information to sample informative neighbors, i.e., neighbors that are more contextually related to the central entity are more likely to be sampled. Our key insight is that the more often a neighbor and the central (or target) entity appear in the same context, the more representative and informative the neighbor is towards the central entity. Since the contexts of two equivalent entities in real-world corpora are usually similar, the stronger a neighbor is contextually related to the target entity, the more alignment clues the neighbor is likely to offer. Experimental results in Sec. 5.3 confirm this observation. Formally, given an entity ei, the probability to sample its one-hop neighbor ei j is determined by: p(hi j|hi) = softmax(hiWshT i j) = exp(hiWshT i j) P k∈Ni exp(hiWshT i k) (2) where Ni is the one-hop neighbor index of central entity ei, hi and hi j are learned embeddings for entities ei and ei j respectively, and Ws is a shared weight matrix. By selectively sampling one-hop neighbors, NMN essentially constructs a discriminative subgraph of neighborhood for each entity, which can enable more accurate alignment through neighborhood matching. 3.4 Neighborhood Matching The neighborhood subgraph, produced by the sampling process, determines which neighbors of the target entity should be considered in the later stages. In other words, later stages of the NMN processing pipeline will only operate on neighbors within the subgraph. In the neighborhood matching stage, we wish to find out, for each candidate entity in the counterpart KG, which neighbors of that entity are closely related to a neighboring node within the subgraph of the target entity. Such information is essential for deciding whether two entities (from two KGs) should be aligned. As discussed in Sec. 3.3, equivalent entities tend to have similar contexts in real-world corpora; therefore, their neighborhoods sampled by NMN should be more likely to be similar. 
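A minimal PyTorch sketch of the layer update in Eq. 1 (with the highway gate) and the sampling distribution in Eq. 2 is given below. The class and function names are ours, the gate placement is one plausible reading of the highway-GCN combination, and equal input/output dimensions are assumed for the highway mix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HighwayGCNLayer(nn.Module):
    """One GCN layer (Eq. 1) followed by a highway gate to limit noise.
    Assumes in_dim == out_dim so the gated mix with the layer input is valid."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)   # W^(l)
        self.gate = nn.Linear(out_dim, out_dim)                 # highway gate

    def forward(self, adj_norm, h):
        # adj_norm: sparse (N, N) normalized adjacency with self-loops,
        # h: (N, in_dim) node features from the previous layer.
        msg = F.relu(torch.sparse.mm(adj_norm, self.weight(h)))  # Eq. 1
        t = torch.sigmoid(self.gate(msg))                        # carry gate
        return t * msg + (1.0 - t) * h                           # highway mix

def sample_neighbors(h, center, neighbors, W_s, k):
    """Eq. 2: sample k one-hop neighbors of entity `center` with probability
    proportional to softmax(h_i W_s h_ij^T) over its neighbor set."""
    scores = h[neighbors] @ (W_s @ h[center])     # (|N_i|,)
    probs = F.softmax(scores, dim=0)
    k = min(k, neighbors.numel())
    idx = torch.multinomial(probs, k, replacement=False)
    return neighbors[idx]
```

Under Eq. 2, neighbors that are more contextually related to the central entity are more likely to be retained in the sampled subgraph.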
NMN exploits this observation to estimate the similarities of the sampled neighborhoods. Candidate selection. Intuitively, for an entity ei in E1, we need to compare its sampled neighborhood subgraph with the subgraph of each candidate entity in E2 to select an optimal alignment entity. Exhaustively trying all possible entities of E2 would be prohibitively expensive for large 6481 real-world KGs. To reduce the matching overhead, NMN takes a low-cost approximate approach. To that end, NMN first samples an alignment candidate set Ci = {ci1, ci2, ..., cit|cik ∈E2} for ei in E1, and then calculates the subgraph similarities between ei and these candidates. This is based on an observation that the entities in E2 which are closer to ei in the embedding space are more likely to be aligned with ei. Thus, for an entity ej in E2, the probability that it is sampled as a candidate for ei can be calculated as: p(hj|hi) = exp(∥hi −hj∥L1) P k∈E2 exp(∥hi −hk∥L1) (3) Cross-graph neighborhood matching. Inspired by recent works in graph matching (Li et al., 2019b), our neighbor matching module takes a pair of subgraphs as input, and computes a cross-graph matching vector for each neighbor, which measures how well this neighbor can be matched to any neighbor node in the counterpart. Formally, let (ei, cik) be an entity pair to be measured, where ei ∈E1 and cik ∈E2 is one of the candidates of ei, p and q are two neighbors of ei and cik, respectively. The cross-graph matching vector for neighbor p can be computed as: apq = exp(hp · hq) P q′∈Ns ik exp(hp · hq′) (4) mp = X q∈Ns ik apq(hp −hq) (5) where apq are the attention weights, mp is the matching vector for p, and it measures the difference between hp and its closest neighbor in the other subgraph, Ns ik is the sampled neighbor set of cik, hp and hq are the GCN-output embeddings for p and q respectively. Then, we concatenate neighbor p’s GCN-output embeddings with weighted matching vector mp: ˆhp = [hp∥β ∗mp] (6) For each target neighbor in a neighborhood subgraph, the attention mechanism in the matching module can accurately detect which of the neighbors in the subgraph of another KG is most likely to match the target neighbor. Intuitively, the matching vector mp captures the difference between the two closest neighbors. When the representations of the two neighbors are similar, the matching vector tends to be a zero vector so that their representations stay similar. When the neighbor representations differ, the matching vector will be amplified through propagation. We find this matching strategy works well for our problem settings. 3.5 Neighborhood Aggregation In the neighborhood aggregation stage, we combine the neighborhood connection information (learned at the KG structure embedding stage) as well as the output of the matching stage (Sec. 3.4) to generate the final embeddings used for alignment. Specifically, for entity ei, we first aggregate its sampled neighbor representations {ˆhp}. Inspired by the aggregation method in (Li et al., 2016), we compute a neighborhood representation for ei as: gi = ( X p∈Ns i σ(ˆhpWgate) · ˆhp)WN (7) Then, we concatenate the central entity ei’s GCN-output representation hi with its neighborhood representation to construct the matching oriented representation for ei: hmatch i = [gi∥hi] (8) 3.6 Entity Alignment and Training Pre-training. As discussed in Sec. 3.3, our neighborhood sampling is based on the GCNoutput entity embeddings. 
Therefore, we first pretrain the GCN-based KG embedding model to produce quality entity representations. Specifically, we measure the distance between two entities to determine whether they should be aligned: ˜d(e1, e2) = ∥he1 −he2∥L1 (9) The objective of the pre-trained model is: ˜L = X (i,j)∈L X (i′,j′)∈L′ max{0, ˜d(i, j) −˜d(i′, j′) + γ} (10) where γ > 0 is a margin hyper-parameter; L is our alignment seeds and L′ is the set of negative aligned entity pairs generated by nearest neighbor sampling (Kotnis and Nastase, 2017). Overall training objective. The pre-training phase terminates once the entity alignment performance has converged to be stable. We find that after this stage, the entity representations given by the GCN are sufficient for supporting the neighborhood sampling and matching modules. Hence, 6482 Figure 3: Distribution of difference in the size of neighborhoods of aligned entity pairs on DBP15KZH−EN. we replace the loss function of NMN after the pretraining phase as: L = X (r,t)∈L X (r′,t′)∈C max{0, d(r, t) −d(r′, t′) + γ} (11) d(r, t) = ∥hmatch r −hmatch t ∥L1 (12) where the negative alignments set C = {(r′, t′)|(r′ = r ∧t′ ∈Cr) ∨(t′ = t ∧r′ ∈Ct)} is made up of the alignment candidate sets of r and t, Cr and Ct are generated in the candidate selection stage described in Sec. 3.4. Note that our sampling process is nondifferentiable, which corrupts the training of weight matrix Ws in Eq. 2. To avoid this issue, when training Ws, instead of direct sampling, we aggregate all the neighbor information by intuitive weighted summation: gw i = ( X p∈Ni αip · σ(ˆhpWgate) · ˆhp)WN (13) where αip is the aggregation weight for neighbor p, and is the sampling probability p(hp|hi) for p given by Eq. 2. Since the aim of training Ws is to let the learned neighborhood representations of aligned entities to be as similar as possible, the objective is: Lw = X (r,t)∈L ∥gw r −gw t ∥L1 (14) In general, our model is trained end-to-end after pre-training. During training, we use Eq. 11 as the main objective function, and, every 50 epochs, we tune Ws using Eq. 14 as the objective function. 4 Experimental Setup Datasets. Follow the common practice of recent works (Sun et al., 2018; Cao et al., 2019; Sun et al., 2020), we evaluate our model on DBP15K (Sun et al., 2017) and DWY100K (Sun et al., 2018) datasets, and use the same split with previous works, 30% for training and 70% for testing. To Datasets Ent. Rel. Tri. Tri. Remain in S. ZH-EN ZH 66,469 2,830 153,929 26% EN 98,125 2,317 237,674 100% JA-EN JA 65,744 2,043 164,373 41% EN 95,680 2,096 233,319 100% FR-EN FR 66,858 1,379 192,191 45% EN 105,889 2,209 278,590 100% Table 1: Summary of DBP15K and S-DBP15k. Datasets Ent. Rel. Tri. DBP-WD DBpedia 100,000 330 463,294 Wikidata 100,000 220 448,774 DBP-YG DBpedia 100,000 302 428,952 YAGO3 100,000 31 502,563 Table 2: Summary of DWY100K. evaluate the performance of NMN in a more challenging setting, we also build a sparse dataset SDBP15K based on DBP15K. Specifically, we randomly remove a certain proportion of triples in the non-English KG to increase the difference in neighborhood size for entities in different KGs. Table 1 gives the detailed statistics of DBP15K and S-DBP15K, and the information of DWY100K is exhibited in Table 2. Figure 3 shows the distribution of difference in the size of one-hop neighborhoods of aligned entity pairs. Our source code and datasets are freely available online.1 Comparison models. 
We compare NMN against 12 recently proposed embedding-based alignment methods: MTransE (Chen et al., 2017), JAPE (Sun et al., 2017), IPTransE (Zhu et al., 2017), GCNAlign (Wang et al., 2018), BootEA (Sun et al., 2018), SEA (Pei et al., 2019a), RSN (Guo et al., 2019), MuGNN (Cao et al., 2019), KECG (Li et al., 2019a), AliNet (Sun et al., 2020), GMNN (Xu et al., 2019) and RDGCN (Wu et al., 2019a). The last two models also utilize entity names for alignment. Model variants. To evaluate different components of our model, we provide two implementation variants of NMN: (1) NMN (w/o nbr-m), where we replace the neighborhood matching part by taking the average of sampled neighbor representations as the neighborhood representation; and (2) NMN (w/o nbr-s), where we remove the sampling process and perform neighborhood matching on all one-hop neighbors. Implementation details. The configuration we use in the DBP15K and DWY100k datasets is: β = 0.1, γ = 1.0, and we sample 5 neigh1https://github.com/StephanieWyt/NMN 6483 bors for each entity in the neighborhood sampling stage (Sec. 3.3). For S-DBP15K, we set β to 1. We sample 3 neighbors for each entity in SDBP15KZH−EN and S-DBP15KJA−EN, and 10 neighbors in S-DBP15KFR−EN. NMN uses a 2layer GCN. The dimension of hidden representations in GCN layers described in Sec. 3.2 is 300, and the dimension of neighborhood representation gi described in Sec. 3.5 is 50. The size of the candidate set in Sec. 3.4 is 20 for each entity. The learning rate is set to 0.001. To initialize entity names, for the DBP15K datasets, we first use Google Translate to translate all non-English entity names into English, and use pre-trained English word vectors glove.840B.300d2 to construct the initial node features of KGs. For the DWY100K datasets, we directly use the pretrained word vectors to initialize the nodes. Metrics. Following convention, we use Hits@1 and Hits@10 as our evaluation metrics. A Hits@k score is computed by measuring the proportion of correctly aligned entities ranked in the top k list. A higher Hits@k score indicates better performance. 5 Experimental Results 5.1 Performance on DBP15K and DWY100K Table 3 reports the entity alignment performance of all approaches on DBP15K and DWY100K datasets. It shows that the full implementation of NMN significantly outperforms all alternative approaches. Structured-based methods. The top part of the table shows the performance of the state-of-the-art structure-based models which solely utilize structural information. Among them, BootEA delivers the best performance where it benefits from more training instances through a bootstrapping process. By considering the structural heterogeneity, MuGNN and AliNet outperform most of other structure-based counterparts, showing the importance of tackling structural heterogeneity. Entity name initialization. The middle part of Table 3 gives the results of embedding-based models that use entity name information along with structural information. Using entity names to initialize node features, the GNN-based models, GMNN and RDGCN, show a clear improvement over structure-based models, suggesting that entity 2http://nlp.stanford.edu/projects/glove/ names provide useful clues for entity alignment. In particular, GMNN achieves the highest Hits@10 on the DWY100K datasets, which are the only monolingual datasets (in English) in our experiments. 
We also note that, GMNN pre-screens a small candidate set for each entity based on the entity name similarity, and only traverses this candidate set during testing and calculating the Hits@k scores. NMN vs. its variants. The bottom part of Table 3 shows the performance of NMN and its variants. Our full NMN implementation substantially outperforms all baselines across nearly all metrics and datasets by accurately modeling entity neighborhoods through neighborhood sampling and matching and using entity name information. Specifically, NMN achieves the best Hits@1 score on DBP15KZH−EN, with a gain of 2.5% compared with RDGCN, and 5.4% over GMNN. Although RDGCN employs a dual relation graph to model the complex relation information, it does not address the issue of neighborhood heterogeneity. While GMNN collects all one-hop neighbors to construct a topic entity graph for each entity, its strategy might introduce noises since not all onehop neighbors are favorable for entity alignment. When comparing NMN and NMN (w/o nbr-m), we can observe around a 2.5% drop in Hits@1 and a 0.6% drop in Hits@10 on average, after removing the neighborhood matching module. Specifically, the Hits@1 scores between NMN and NMN (w/o nbr-m) differ by 3.9% on DBP15KFR−EN. These results confirm the effectiveness of our neighborhood matching module in identifying matching neighbors and estimating the neighborhood similarity. Removing the neighbor sampling module from NMN, i.e., NMN (w/o nbr-s), leads to an average performance drop of 0.3% on Hits@1 and 1% on Hits@10 on all the datasets. This result shows the important role of our sampling module in filtering irrelevant neighbors. When removing either the neighborhood matching module (NMN (w/o nbr-m)) or sampling module (NMN (w/o nbr-s)) from our main model, we see a substantially larger drop in both Hits@1 and Hits@10 on DBP15K than on DWY100K. One reason is that the heterogeneity problem in DBP15K is more severe than that in DWY100K. The average proportion of aligned entity pairs that have a different number of neighbors is 89% in DBP15K compared to 84% in DWY100K. These results show 6484 Models DBPZH-EN DBPJA-EN DBPFR-EN DBP-WD DBP-YG Hits@1 Hits@10 Hits@1 Hits@10 Hits@1 Hits@10 Hits@1 Hits@10 Hits@1 Hits@10 MTransE (Chen et al., 2017) 30.8 61.4 27.9 57.5 24.4 55.6 28.1 52.0 25.2 49.3 JAPE (Sun et al., 2017) 41.2 74.5 36.3 68.5 32.4 66.7 31.8 58.9 23.6 48.4 IPTransE (Zhu et al., 2017) 40.6 73.5 36.7 69.3 33.3 68.5 34.9 63.8 29.7 55.8 GCN-Align (Wang et al., 2018) 41.3 74.4 39.9 74.5 37.3 74.5 50.6 77.2 59.7 83.8 SEA (Pei et al., 2019a) 42.4 79.6 38.5 78.3 40.0 79.7 51.8 80.2 51.6 73.6 RSN (Guo et al., 2019) 50.8 74.5 50.7 73.7 51.6 76.8 60.7 79.3 68.9 87.8 KECG (Li et al., 2019a) 47.8 83.5 49.0 84.4 48.6 85.1 63.2 90.0 72.8 91.5 MuGNN (Cao et al., 2019) 49.4 84.4 50.1 85.7 49.5 87.0 61.6 89.7 74.1 93.7 AliNet (Sun et al., 2020) 53.9 82.6 54.9 83.1 55.2 85.2 69.0 90.8 78.6 94.3 BootEA (Sun et al., 2018) 62.9 84.8 62.2 85.4 65.3 87.4 74.8 89.8 76.1 89.4 GMNN (Xu et al., 2019) 67.9 78.5 74.0 87.2 89.4 95.2 93.0 99.6 94.4 99.8 RDGCN (Wu et al., 2019a) 70.8 84.6 76.7 89.5 88.6 95.7 97.9 99.1 94.7 97.3 NMN 73.3 86.9 78.5 91.2 90.2 96.7 98.1 99.2 96.0 98.2 w/o nbr-m 71.1 86.7 75.4 90.4 86.3 95.8 96.0 98.4 95.0 97.8 w/o nbr-s 73.0 85.6 77.9 88.8 89.9 95.7 98.0 99.0 95.9 98.1 Table 3: Performance on DBP15K and DWY100K. 
Models ZH-EN JA-EN FR-EN Hits@1 Hits@10 Hits@1 Hits@10 Hits@1 Hits@10 BootEA 12.2 27.5 27.8 52.6 32.7 53.2 GMNN 47.5 68.3 58.8 78.2 75.0 90.9 RDGCN 60.7 74.6 69.3 82.9 83.6 92.6 NMN 62.0 75.1 70.3 84.4 86.3 94.0 w/o nbr-m 52.0 71.1 62.1 82.7 80.0 92.0 w/o nbr-s 60.9 74.1 70.7 84.5 86.5 94.2 Table 4: Performance on S-DBP15K. that our sampling and matching modules are particularly important, when the neighborhood sizes of equivalent entities greatly differ and especially there may be few common neighbors in their neighborhoods. 5.2 Performance on S-DBP15K On the more sparse and challenging datasets SDBP15K, we compare our NMN model with the strongest structure-based model, BootEA, and GNN-based models, GMNN and RDGCN, which also utilize the entity name initialization. Baseline models. In Table 4, we can observe that all models suffer a performance drop, where BootEA endures the most significant drop. With the support of entity names, GMNN and RDGCN achieve better performances over BootEA. These results show when the alignment clues are sparse, structural information alone is not sufficient to support precise comparisons, and the entity name semantics are particularly useful for accurate alignment in such case. NMN. Our NMN outperforms all three baselines on all sparse datasets, demonstrating the effectiveness and robustness of NMN. As discussed in Sec. 1, the performances of existing embeddingbased methods decrease significantly as the gap of equivalent entities’ neighborhood sizes increases. Specifically, on DBP15KZH−EN, our NMN outperforms RDGCN, the best-performing baseline, by a large margin, achieving 65%, 53% and 48% on Hits@1 on the entity pairs whose number of neighbors differs by more than 10, 20 and 30, respectively. Sampling and matching strategies. When we compare NMN and NMN (w/o nbr-m) on the SDBP15K, we can see a larger average drop in Hits@1 than on the DBP15K (8.2% vs. 3.1%). The result indicates that our neighborhood matching module plays a more important role on the more sparse dataset. When the alignment clues are less obvious, our matching module can continuously amplify the neighborhood difference of an entity pair during the propagation process. In this way, the gap between the equivalent entity pair and the negative pairs becomes larger, leading to correct alignment. Compared with NMN, removing sampling module does hurt NMN in both Hits@1 and Hits@10 on S-DBP15KZH−EN. But, it is surprising that NMN (w/o nbr-s) delivers slightly better results than NMN on S-DBP15KJA−EN and SDBP15KFR−EN. Since the average number of neighbors of entities in S-DBP15K is much less than that in the DBP15K datasets. When the number of neighbors is small, the role of sampling will be unstable. In addition, our sampling method is relatively simple. When the alignment clues are very sparse, our strategy may not be robust enough. We will explore more adaptive sampling method and scope in the future. 5.3 Analysis Impact of neighborhood sampling strategies. To explore the impact of neighborhood sampling 6485 Figure 4: Comparison between our neighborhood sampling strategy and random sampling on S-DBP15K. 维亚康姆 雷神 拯救大兵 瑞恩 联合国际 影业 勇敢的心 Hollywood National Amusements Star Trek Into Darkness Saving Private Ryan Viacom 0.128 0.130 0.132 0.134 0.136 0.138 0.140 0.142 Figure 5: Visualization of attention weights in the neighborhood matching module for the example of Paramount Pictures. The green and blue words are two pairs of equivalent neighbors. 
strategies, we compare our NMN with a variant that uses random sampling strategy on S-DBP15K datasets. Figure 4 illustrates the Hits@1 of NMN using our designed graph sampling method (Sec. 3.3) and a random-sampling-based variant when sampling different number of neighbors. Our NMN consistently delivers better results compared to the variant, showing that our sampling strategy can effectively select more informative neighbors. Impact of neighborhood sampling size. From Figure 4, for S-DBP15KZH−EN, both models reach a performance plateau with a sampling size of 3, and using a bigger sampling size would lead to performance degradation. For S-DBP15KJA−EN and S-DBP15KFR−EN, we observe that our NMN performs similarly when sampling different number of neighbors. From Table 1, we can see that S-DBP15KZH−EN is more sparse than SDBP15KJA−EN and S-DBP15KFR−EN. All models deliver much lower performance on SDBP15KZH−EN. Therefore, the neighbor quality of this dataset might be poor, and a larger sampling size will introduce more noise. On the other hand, the neighbors in JA-EN and FR-EN datasets might be more informative. Thus, NMN is not sensitive to the sampling size on these two datasets. How does the neighborhood matching module work? In an attempt to understand how our neighborhood matching strategy helps alignment, we visualize the attention weights in the neighborhood matching module. Considering an equivalent entity pair in DBP15KZH−EN, both of which indicate an American film studio Paramount Pictures. From Figure 5, we can see that the five neighbors sampled by our sampling module for each central entity are very informative ones for aligning the two central entities, such as the famous movies released by Paramount Pictures, the parent company and subsidiary of Paramount Pictures. This demonstrates the effectiveness of our sampling strategy again. Among the sampled neighbors, there are also two pairs of common neighbors (indicate Saving Private Ryan and Viacom). We observe that for each pair of equivalent neighbors, one neighbor can be particularly attended by its counterpart (the corresponding square has a darker color). This example clearly demonstrates that our neighborhood matching module can accurately estimate the neighborhood similarity by accurately detecting the similar neighbors. 6 Conclusion We have presented NMN, a novel embedded-based framework for entity alignment. NMN tackles the ubiquitous neighborhood heterogeneity in KGs. We achieve this by using a new sampling-based approach to choose the most informative neighbors for each entity. As a departure from prior works, NMN simultaneously estimates the similarity of two entities, by considering both topological structure and neighborhood similarity. We perform extensive experiments on real-world datasets and compare NMN against 12 recent embedded-based methods. Experimental results show that NMN achieves the best and more robust performance, consistently outperforming competitive methods across datasets and evaluation metrics. Acknowledgments This work is supported in part by the National Hi-Tech R&D Program of China (No. 2018YFB1005100), the NSFC under grant agreements 61672057, 61672058 and 61872294, and a UK Royal Society International Collaboration Grant. For any correspondence, please contact Yansong Feng. 6486 References Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima’an. 2017. Graph convolutional encoders for syntax-aware neural machine translation. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1957–1967, Copenhagen, Denmark. Association for Computational Linguistics. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems 26, pages 2787–2795. Curran Associates, Inc. Yixin Cao, Zhiyuan Liu, Chengjiang Li, Zhiyuan Liu, Juanzi Li, and Tat-Seng Chua. 2019. Multi-channel graph neural network for entity alignment. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1452– 1461, Florence, Italy. Association for Computational Linguistics. Muhao Chen, Yingtao Tian, Kai-Wei Chang, Steven Skiena, and Carlo Zaniolo. 2018. Co-training embeddings of knowledge graphs and entity descriptions for cross-lingual entity alignment. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 3998–4004. ijcai.org. Muhao Chen, Yingtao Tian, Mohan Yang, and Carlo Zaniolo. 2017. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 1511–1517. ijcai.org. Lingbing Guo, Zequn Sun, and Wei Hu. 2019. Learning to exploit long-term relational dependencies in knowledge graphs. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 2505–2514. PMLR. Pili Hu and Wing Cheong Lau. 2013. A survey and taxonomy of graph sampling. arXiv preprint arXiv:1308.5865. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Bhushan Kotnis and Vivi Nastase. 2017. Analysis of the impact of negative sampling on link prediction in knowledge graphs. arXiv preprint arXiv:1708.06816. Chengjiang Li, Yixin Cao, Lei Hou, Jiaxin Shi, Juanzi Li, and Tat-Seng Chua. 2019a. Semi-supervised entity alignment via joint knowledge embedding model and cross-graph model. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2723–2732, Hong Kong, China. Association for Computational Linguistics. Yujia Li, Chenjie Gu, Thomas Dullien, Oriol Vinyals, and Pushmeet Kohli. 2019b. Graph matching networks for learning the similarity of graph structured objects. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 915 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 3835–3845. PMLR. Yujia Li, Richard Zemel, Marc Brockschmidt, and Daniel Tarlow. 2016. Gated graph sequence neural networks. In Proceedings of ICLR’16. Farzaneh Mahdisoltani, Joanna Biega, and Fabian M. Suchanek. 2015. YAGO3: A knowledge base from multilingual wikipedias. In CIDR 2015, Seventh Biennial Conference on Innovative Data Systems Research, Asilomar, CA, USA, January 4-7, 2015, Online Proceedings. www.cidrdb.org. Diego Marcheggiani and Ivan Titov. 2017. 
Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506–1515, Copenhagen, Denmark. Association for Computational Linguistics. Shichao Pei, Lu Yu, Robert Hoehndorf, and Xiangliang Zhang. 2019a. Semi-supervised entity alignment via knowledge graph embedding with awareness of degree difference. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pages 3130–3136. ACM. Shichao Pei, Lu Yu, and Xiangliang Zhang. 2019b. Improving cross-lingual entity alignment via optimal transport. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 3231–3237. ijcai.org. Afshin Rahimi, Trevor Cohn, and Timothy Baldwin. 2018. Semi-supervised user geolocation via graph convolutional networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2009–2019, Melbourne, Australia. Association for Computational Linguistics. John W. Raymond, Eleanor J. Gardiner, and Peter Willett. 2002. RASCAL: calculation of graph similarity using maximum common edge subgraphs. Comput. J., 45(6):631–644. 6487 Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In The Semantic Web - 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3-7, 2018, Proceedings, volume 10843 of Lecture Notes in Computer Science, pages 593–607. Springer. Rupesh Kumar Srivastava, Klaus Greff, and J¨urgen Schmidhuber. 2015. Highway networks. arXiv preprint arXiv:1505.00387. Zequn Sun, Wei Hu, and Chengkai Li. 2017. Cross-lingual entity alignment via joint attributepreserving embedding. In The Semantic Web - ISWC 2017 - 16th International Semantic Web Conference, Vienna, Austria, October 21-25, 2017, Proceedings, Part I, volume 10587 of Lecture Notes in Computer Science, pages 628–644. Springer. Zequn Sun, Wei Hu, Qingheng Zhang, and Yuzhong Qu. 2018. Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden, pages 4396–4402. ijcai.org. Zequn Sun, Chengming Wang, Wei Hu, Muhao Chen, Jian Dai, Wei Zhang, and Yuzhong Qu. 2020. Knowledge graph alignment network with gated multi-hop neighborhood aggregation. In AAAI. Bayu Distiawan Trisedya, Jianzhong Qi, and Rui Zhang. 2019. Entity alignment between knowledge graphs using attribute embeddings. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 297–304. Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua Bengio. 2018. Graph Attention Networks. In ICLR. Zhichun Wang, Qingsong Lv, Xiaohan Lan, and Yu Zhang. 2018. Cross-lingual knowledge graph alignment via graph convolutional networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 349– 357, Brussels, Belgium. Association for Computational Linguistics. Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, Rui Yan, and Dongyan Zhao. 2019a. Relation-aware entity alignment for heterogeneous knowledge graphs. 
In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5278–5284. ijcai.org. Yuting Wu, Xiao Liu, Yansong Feng, Zheng Wang, and Dongyan Zhao. 2019b. Jointly learning entity and relation representations for entity alignment. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 240– 249, Hong Kong, China. Association for Computational Linguistics. Kun Xu, Liwei Wang, Mo Yu, Yansong Feng, Yan Song, Zhiguo Wang, and Dong Yu. 2019. Crosslingual knowledge graph alignment via graph matching neural network. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3156–3161, Florence, Italy. Association for Computational Linguistics. Xifeng Yan, Philip S. Yu, and Jiawei Han. 2004. Graph indexing: A frequent structure-based approach. In Proceedings of the ACM SIGMOD International Conference on Management of Data, Paris, France, June 13-18, 2004, pages 335–346. ACM. Hsiu-Wei Yang, Yanyan Zou, Peng Shi, Wei Lu, Jimmy Lin, and Xu Sun. 2019. Aligning cross-lingual entities with multi-aspect information. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4431–4441, Hong Kong, China. Association for Computational Linguistics. Rui Ye, Xin Li, Yujie Fang, Hongyu Zang, and Mingzhong Wang. 2019. A vectorized relational graph convolutional network for multi-relational network alignment. In Proceedings of the TwentyEighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 1016, 2019, pages 4135–4141. ijcai.org. Qingheng Zhang, Zequn Sun, Wei Hu, Muhao Chen, Lingbing Guo, and Yuzhong Qu. 2019. Multi-view knowledge graph embedding for entity alignment. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 5429–5435. ijcai.org. Hao Zhu, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2017. Iterative entity alignment via joint knowledge embeddings. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 4258–4264. ijcai.org. Qiannan Zhu, Xiaofei Zhou, Jia Wu, Jianlong Tan, and Li Guo. 2019. Neighborhood-aware attentional representation for multilingual knowledge graphs. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pages 1943–1949. ijcai.org.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6488–6494 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6488 Relation Extraction with Explanation Hamed Shahbazi, Xiaoli Z. Fern, Reza Ghaeini, Prasad Tadepalli School of Electrical Engineering and Computer Science, Oregon State University Corvallis, OR, USA {shahbazh, xfern, ghaeinim, tadepall}@oregonstate.edu Abstract Recent neural models for relation extraction with distant supervision alleviate the impact of irrelevant sentences in a bag by learning importance weights for the sentences. Efforts thus far have focused on improving extraction accuracy but little is known about their explainability. In this work we annotate a test set with ground-truth sentence-level explanations to evaluate the quality of explanations afforded by the relation extraction models. We demonstrate that replacing the entity mentions in the sentences with their fine-grained entity types not only enhances extraction accuracy but also improves explanation. We also propose to automatically generate “distractor” sentences to augment the bags and train the model to ignore the distractors. Evaluations on the widely used FB-NYT dataset show that our methods achieve new state-of-the-art accuracy while improving model explainability. 1 Introduction Relation extraction with distant supervision associates a pair of entities with a bag of sentences, each containing mentions of both entities. The bag is tagged with relations between the pair in a Knowledge Base (KB), without explicitly indicating which sentence(s) support the relation(s). This method avoids the burden of manual annotations, but presents inherent ambiguity, creating challenges for learning. To alleviate the impact of the irrelevant sentences many approaches have been proposed including models based on attention (Zeng et al., 2015; Lin et al., 2016; Liu et al., 2017; Luo et al., 2017; Du et al., 2018; Wang et al., 2018; Peng and Denilson, 2019; Bai and Ritter, 2019), approaches that use additional resources (Vashishth et al., 2018; Liu et al., 2018) and methods that utilize supervision data (Pershina et al., 2014; Angeli et al., 2014; Beltagy et al., 2019). These studies primarily focus on improving relation extraction accuracy and little is known about whether the models are making right decision for the right reason or because of some irrelevant biases (Agrawal et al., 2016; Gururangan et al., 2018; Ghaeini et al., 2019). This paper examines two strong baseline relation extraction models with several explanation mechanisms. We manually annotated a test set from the widely used FB-NYT dataset with ground truth explanations to evaluate the quality of the explanation afforded by these models. We also introduce two different methods for improving relation extraction. First, we demonstrate that replacing the entity mentions with their fine-grained entity types for sentence representation leads to improvement in both the extract accuracy and model explainability. Second, we augment the bags with automatically generated “distractor” sentences (i.e., sentences that contain no supporting information for the relation) and train the model to appropriately ignore the irrelevant information. Our evaluation on the widely used FB-NYT dataset verifies that the proposed methods achieve the new state of the art for the extraction performance along with improved model explainability. 2 Problem Setup Given entity pair (ei, ej), we form a bag Bi,j = {s1, . . . 
sNij} with Nij sentences that contain mentions of both entities and label it by the set of relations between ei and ej from the KB. Neural models for relation extraction encode each sentences into a vector representation and a bag Bi,j is thus represented by {x1, . . . xN ij} where xi ∈Rd. Given a set of bags and the associated labels, the training objective is to learn a model that predicts the probability P(r = k|Bi,j) that relation k exists between ei and ej based on Bi,j, where k ∈1 . . . K and K is the total number of relations 6489 in the KB. There are zero to multiple possible relation labels for each bag. Importantly, only some sentences in the bag express any of the relations and the others are irrelevant (provide no information regarding the relations), but such sentences are not labeled. 3 Baseline Models We consider two baselines. The first is DirectSup, a recent model achieving the state-of-the-art performance by utilizing auxiliary supervision (Beltagy et al., 2019). The second baseline (CNNs+ATT) revamps the classic attention based method by Lin et al. (2016) but adopts the same sentence encoder as DirectSup for ease of comparisons. In this work, we add a ReLU at the end of the sentence encoder (Beltagy et al., 2019) to produce positive sentence representations. See (Beltagy et al., 2019) for detailed information regarding the sentence encoder. DirectSup. Given a bag of sentences, DirectSup encodes each sentence using CNNs with different filter sizes. The outputs of the CNNs with different filter sizes are concatenated to produce the encoding of the sentence. Given a bag B and the encoding of its sentences {x1, x2, ..., xN}, DirectSup assigns an importance weight for each sentence based on the output of a binary classifier learned from an additional direct supervision data in a multi-task manner. Given a sentence encoding xn, the binary classifier provides a weight αn ∈[0, 1] indicating the likelihood that xn expresses some form of relations in the KB. As a result, for a bag Bi,j, we have importance weights {α1, . . . , αN}. It then produces a single bag representation as follows: ¯x = Max-pool({α1x1, . . . , αnxN}) (1) and the prediction for relation k is given by: P(r = k|B) = σ(¯x· rk + bk) (2) where rk is an embedding of relation k, bk is a bias variable and σ is the Sigmoid function. CNNs+ATT. This model uses the same sentence encoder as DirectSup but differs in the attention mechanism used to decide sentence importance. Specifically, it follows Lin et al. (2016) and computes the importance weights of the sentences in bag B with encodings {x1, . . . , xN} as follows: αk,n = exp(xnAqk) PN i=1 exp(xiAqk) (3) where qk is a learned query vector associated with relation k and A is a diagonal matrix. Given {αk,1, ..., αk,N}, we compute a bag representation specific for relation k by: ¯xk = N X n=1 αk,nxn (4) and the prediction for relation k is given by: P(r = k|B) = σ(¯xk· rk + bk) (5) where rk is relation k’s embedding and bk is the bias. Entity embedding. Prior work has demonstrated that incorporating entity embeddings into the relation extraction model leads to improved accuracy (Ji et al., 2017; Beltagy et al., 2019). Here we also consider this strategy with the baseline models. Specifically, let vi and vj be the entity embedding of ei and ej, we concatenate the bag representations ¯x with vi −vj and vi ◦vj, where ◦is element-wise product. We then apply a linear project layer with ReLU to produce a new bag representation for final prediction with Eq. 2 and 5. 
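To make the two aggregation schemes concrete, here is a minimal PyTorch sketch (ours, not the released code) of the bag-level scoring in Eqs. 1-2 and Eqs. 3-5; tensor shapes are illustrative and the entity-embedding concatenation described above is omitted.

```python
import torch
import torch.nn.functional as F

def directsup_bag_probs(X, alpha, R, b):
    """DirectSup-style aggregation (Eqs. 1-2).
    X: (N, d) sentence encodings of one bag; alpha: (N,) weights from the
    auxiliary binary classifier; R: (K, d) relation embeddings; b: (K,) biases.
    Returns P(r=k|B) for all K relations."""
    bag = (alpha.unsqueeze(1) * X).max(dim=0).values      # Eq. 1: weighted max-pool
    return torch.sigmoid(bag @ R.t() + b)                  # Eq. 2

def cnns_att_bag_probs(X, Q, A_diag, R, b):
    """CNNs+ATT aggregation (Eqs. 3-5) with a relation-specific bag vector.
    Q: (K, d) query vectors q_k; A_diag: (d,) diagonal of the matrix A."""
    scores = (X * A_diag) @ Q.t()                          # (N, K): x_n A q_k
    alpha = F.softmax(scores, dim=0)                       # Eq. 3: softmax over sentences
    bags = alpha.t() @ X                                   # Eq. 4: (K, d) bag vectors
    return torch.sigmoid((bags * R).sum(dim=1) + b)        # Eq. 5
```

The two baselines thus differ only in where the sentence weights come from: an auxiliary binary classifier trained with direct supervision in DirectSup versus relation-specific attention queries in CNNs+ATT.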
For any entity ei its embedding vector vi is obtained by concatenating the average of its skipgram (Mikolov et al., 2013) word embeddings and the embeddings produced by Zhang et al. (2019) (produced by using TransE on Wikipedia factual tuples). Training objective. For all the models in this work we use the binary cross entropy loss function for training: l = − X Bi,j K X k=1 1i,j,k log P(r = k|Bi,j)+ (1 −1i,j,k) log (1 −P(r = k|Bi,j)) (6) where 1i,j,k is an indicator function that takes value 1 if relation k exists for bag Bi,j. 4 Explanation Mechanisms The importance weights (α’s, aka attention), generated by the models can be interpreted as explanations. However, recent studies (Ghaeini et al., 2018; Jain et al., 2019; Wiegreffe and Pinter, 2019) have questioned the validity of attention as a faithful explanation of model’s behavior. Thus we consider the following additional explanation mechanisms: Saliency. Recent works show that a model’s prediction can be explained by examining the input 6490 saliency, based on the gradient of the output w.r.t. the inputs (Simonyan et al., 2012; Ross et al., 2017; Ghaeini et al., 2019). We define the saliency of sentence n for relation k, denoted by Sxn,k, as the L1 norm of the gradient of relation k logit ok with respect to xn.(Appendix. A.1). Gradient × input. This is a commonly used measure for input attributions (Shrikumar et al., 2016; Selvaraju et al., 2019). We will refer to this measure as GIxn,k, computed as P i xn[i] × ∂ok ∂xn [i]. Leave One Out (loo). This measures the sensitivity of ok to the removal of a sentence. We refer to this measure as looxn,k = (ok −ok,−n), where ok,−n is the new logit of relation k after removing sentence xn from its bag. 5 Proposed Methods We propose two different approaches for improving relation extraction. The first method we propose, introduces a subtle change to the representation of the sentences, which lead to higher performance and better explanation quality. We further propose to automatically generate “distractor” sentences and train the model to appropriately ignore them. Sentence representation. Each sentence in a bag contains entity mentions mi and mj for entities ei and ej respectively. In prior work mi and mj are kept unchanged (Lin et al., 2016; Beltagy et al., 2019). We argue that when entity mentions are used to compute the sentence representation, they provide such rich information that the model may not need to look at the rest of the sentence to deduce a relation. To ensure that our predictions are supported by appropriate sentences, we need to remove this effect. We propose to replace the entity mentions with their Fine-Grained Entity Types (FGET) Ling and Weld (2012) to force the model to identify the relations through the sentences. Learning from distractors. Prior work studied learning from human provided rationales (Lei et al., 2016; Ross et al., 2017; Bao et al., 2018; Ghaeini et al., 2019) in order to improve model explainability. However, human rationales are expensive to acquire. In this work we propose to learn from automatically generated “distractor” sentences. Let Bi,j be a positive training bag (contains at least one relation) with entities (ei, ej) of FGET (ti, tj). Let Rij(|Rij| > 1) be the set of annotated relations for Bi,j. For each k in Rij, we sample a “distractor” sentence s′ k from the set of sentences in the training set such that 1) it belongs to a bag whose FGET is (ti, tj) 2) the bag is not annotated with relation label k. 
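A minimal sketch of this distractor selection is shown below, assuming pre-built lookup tables from FGET pairs to training bags; the function name and data structures are ours and only illustrate the two conditions above, together with the fallback discussed next.

```python
import random

def sample_distractor(fget_pair, relation_k, bags_by_fget, bag_labels,
                      bag_sentences, negative_bags):
    """Pick a distractor sentence for one (bag, relation) pair.

    fget_pair     : (t_i, t_j) fine-grained entity types of the target bag
    relation_k    : relation label the distractor bag must NOT be annotated with
    bags_by_fget  : dict (t_i, t_j) -> list of training bag ids with that FGET pair
    bag_labels    : dict bag_id -> set of annotated relation labels
    bag_sentences : dict bag_id -> list of sentences in that bag
    negative_bags : list of bag ids with no relation (fallback pool)
    """
    candidates = [b for b in bags_by_fget.get(fget_pair, [])
                  if relation_k not in bag_labels.get(b, set())]
    if candidates:
        bag_id = random.choice(candidates)
    else:
        # Fallback: a random sentence from a random negative bag.
        bag_id = random.choice(negative_bags)
    return random.choice(bag_sentences[bag_id])
```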
If s′_k is not found this way, we simply choose a random sentence from a random negative bag (a bag with no relation). Given s′_k, we replace its entity mentions with e_i and e_j (or t_i and t_j for the FGET-based sentence representation) of a sentence in B_{i,j} and add it to the bag, resulting in an augmented bag B′_{i,j} for relation k.

To learn from the augmented bags, we feed B′_{i,j} into the model with the goal of lowering the contribution of the distractor sentence relative to the original sentences in the bag. Specifically, we use GI to measure the sentence-level contribution and define the distractor loss for relation k as:

l'_{d,k} = \max\big(0,\ \gamma + GI_{x'_k,k} - \max_{x \in B_{i,j}} GI_{x,k}\big) + \big|GI_{x'_k,k}\big|   (7)

where x′_k is the encoding of the distractor sentence s′_k and γ is a margin hyper-parameter. The first term ensures that the contribution of the distractor is lower than the maximum contribution of all the sentences in the original bag, and the second term reduces the absolute contribution of the distractor. Although we use GI in Eq. 7, other explanation measures such as saliency, or the positive portion of the contributions, can also be applied here. Moreover, a more advanced mechanism for generating distractors will likely lead to higher performance. We hence update the loss in Eq. 6 to:

l_m = l + \lambda l'_d   (8)

where l'_d = \sum_k l'_{d,k} and λ trades off the regular learning loss against the distractor loss.

6 Experiments
In this section, we empirically evaluate our proposed methods both in terms of their relation extraction performance and their explainability.

6.1 Dataset and Setup
Dataset. Similar to our baselines and prior work, we use the modified version of the FB-NYT dataset. The original FB-NYT dataset was built by Riedel et al. (2010) on New York Times articles aligned to Freebase facts; it was later modified by Lin et al. (2016). There are 52 relations in this dataset, with "place lived", "capital", "neighborhood of", "nationality" and "location" being the most frequent. Tab. 1 shows the size of the modified dataset.

[Figure 1: Precision-recall curves without entity embeddings (CNNs+Att +F, DirectSup, CNNs+Att +LD, CNNs+Att +F +LD, DirectSup +F +LD).]
[Figure 2: Precision-recall curves with entity embeddings (CNNs+Att +F +E, DirectSup +E, CNNs+Att +LD +E, CNNs+Att +F +LD +E, DirectSup +F +LD +E).]

                Train     Test
Sentences       472,963   172,448
Positive bags   16,625    1,950
Negative bags   236,811   94,917
Table 1: FB-NYT modified dataset.

Setup and Training. All models are implemented in PyTorch and trained with an Adam optimizer with learning rate 0.001 for a maximum of 30 epochs. We use 300-d skip-gram (Mikolov et al., 2013) word embeddings, FGET embeddings, and 5-d position embeddings. During training we freeze the word and entity embeddings. All reported results are averaged over three different random runs. We train on 90% of the training set and keep the remaining 10% for validation. We select λ from the set {0.01, 0.1, 1.0, 10.0, 100.0} and set λ = 1.0 based on validation AUC; the margin is fixed at γ = 0.00001.

Ground-truth explanations. There are 1950 positive bags (6444 sentences) in the test split of FB-NYT. For each sentence-relation pair in a bag we annotate whether the sentence entails the relation or not. Based on the annotations, we extract a set called expl-eval (see Appendix A.2 for details) consisting of tuples of (bag-id, relation, positive sentence in bag, negative sentence in bag).
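Scoring sentences for such comparisons can be done directly with automatic differentiation; the sketch below (ours, with a deliberately simplified model interface) computes the three measures of Section 4 for a single bag.

```python
import torch

def importance_scores(model, X, k):
    """Compute the Section-4 explanation measures for one bag.

    model : callable mapping a (N, d) bag of sentence encodings to
            per-relation logits o of shape (K,)  -- simplified interface
    X     : (N, d) sentence encodings; k : relation index of interest.
    Returns (saliency, grad_x_input, leave_one_out), each of shape (N,).
    """
    X = X.clone().detach().requires_grad_(True)
    o_k = model(X)[k]
    grads, = torch.autograd.grad(o_k, X)          # d o_k / d x_n, shape (N, d)

    saliency = grads.abs().sum(dim=1)             # S_{x_n,k}: L1 norm of the gradient
    grad_x_input = (grads * X).sum(dim=1)         # GI_{x_n,k}: sum_i x_n[i] * do_k/dx_n[i]

    # loo_{x_n,k} = o_k - o_{k,-n}: drop one sentence at a time and re-score.
    loo = torch.empty(X.size(0))
    with torch.no_grad():
        for n in range(X.size(0)):
            keep = [m for m in range(X.size(0)) if m != n]
            loo[n] = o_k.detach() - model(X[keep].detach())[k]
    return saliency, grad_x_input, loo
```

These per-sentence scores, together with the attention weights α, are the explanation signals that get compared against the annotated ordering.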
Each tuple provides a desired ordering of two sentences when measuring their importance to the model. expl-eval is then used to compute the Kendall Tau correlation between the annotation and the explanations, which measures how consistently the importance weights ranks the sentences compared to the ground truth. model AUC (-E) AUC (+E) CNNs+ATT 25.1 DirectSup 26.4 28.1 CNNs+ATT +F 26.1 31.5 DirectSup +F 26.9 33.3 CNNs+ATT +FE 27.4 33.1 DirectSup +FE 27.6 33.4 CNNs+ATT +LD 27.1 33.6 CNNs+ATT +F +LD 27.7 33.9 DirectSup +F +LD 27.8 34.1 F: Replace entity mention with FGET FE: Replace entity mention with concatenation of FGET and entity mention LD: Learning from distractor Table 2: AUC results on FB-NYT. 6.2 Relation Extraction Performance Similar to prior work we use precision-recall (PR) curves to characterize the extraction performance and report the area under the PR curve (AUC) up to 0.4 recall. Tab. 2 reports the AUCs of the baselines and different variants of our proposed models with (+E) and without (-E) incorporating entity embeddings. Specifically, we consider two different ways of incorporating the FGET representations. Rows 3-4 show the AUCs of the two baseline models when we replace entity mentions with their FGET (+F), whereas rows 5-6 show the AUCs when we concatenate the FGET with the entity mentions (+FE). From the results we can see that both baselines see clear performance gain from incorporating FGET into the representations. Combining FGET with entity mention (+FE) achieves higher performance than using only FGET (+F), but our hypothesis is that the former will lead to less explainable models, which we will examine in the next section. Finally the last three rows of the table show that adding LD to different base models can further improve 6492 model loo (H) loo (L) Sxn,k(H) Sxn,k(L) GIxn,k(H) GIxn,k(L) αxn(H) αxn(L) CNNs+ATT 0.16 -0.08 0.19 -0.02 0.20 0.04 0.69 0.21 DirectSup 0.19 0.12 0.08 0.15 0.29 0.19 0.26 -0.12 CNNs+ATT +F 0.21 0.10 0.36 0.03 0.23 0.00 0.73 0.11 DirectSup +F 0.24 0.15 0.31 -0.19 0.40 -0.17 0.28 0.15 CNNs+ATT +FE 0.01 -0.11 0.21 -0.14 0.20 -0.20 0.24 0.01 DirectSup +FE 0.14 -.12 0.19 -0.10 0.29 0.06 0.17 -0.11 CNNs+ATT +LD 0.18 -0.01 0.22 0.10 0.21 0 0.67 0.11 CNNs+ATT +LD +F 0.22 -0.11 0.43 0.09 0.28 0.07 0.70 0.12 DirectSup +LD +F 0.23 0.14 0.38 0.01 0.49 0.20 0.45 0.02 H: High confidence P (r) ∈[0.76, 1.0] L: Low confidence P (r) ∈[0, 0.25] Table 3: Kendall correlations for top confidence and least confidence range. the AUCs. Similar to prior work, we observe that incorporating entity embeddings(+E) to the model leads to substantial performance gain across the board. We also observe very similar performance gain when adding FGET and LD to the base models both with and without entity embeddings. Our best model achieved an AUC of 0.341, which improves the previous state-of-the-art by 5.7%. 6.3 Evaluation of Explanations We apply the explanation mechanisms described in Section 4 to produce sentence importance scores for the test set and compute the Kendall Tau correlations for the importance scores using expl-eval. For each model, to understand its behavior when it predicts correctly versus incorrectly, we consider the subset H (L) of bags/relations that the model outputs high (low) probability, i.e., p ∈[0.76, 1] ([0, 0.25]), for the correct relation. We report the performance on H and L separately in Tab. 3. Comparing correlation values for H and L in Tab. 
3, we observe that when the models are making correct and confident predictions (H), the values of correlation tend to be higher. In contrast, when the model fails to detect the correct relation (L), we see substantially lower correlation scores. By replacing entity mentions with their FGET in both CNNs+ATT and DirectSup (+F), we observe substantially increased correlation scores for correct predictions (H). The improvement is consistent across all methods that are used to compute the importance scores. Recall that Tab. 2 shows that concatenating FGET with entity mention (+FE) yields improved relation extraction performance for both CNNs+ATT and DirectSup. In contrast, the explanation results presented here show that this comes at the cost of explainability, as demonstrated by the substantially lower correlation scores of CNNs+ATT+FE and DirectSup+FE. This confirms our conjecture that removing entity mentions from the sentence representation leads to more explainable models, possibly by forcing the model to focus on the textual evidence contained in the sentence rather than the word embedding of the mentions. Finally, we note that adding LD further improves the correlation score on H for S, GI and α. This suggests that learning from distractors is a valuable strategy that not only produces better relation extraction performance, but also enhances the model explanability. 7 Conclusion In this work we provided an annotated test set with ground-truth sentence-level explanations to evaluate the explanation quality of relation extraction models with distant supervision. Our examination of two baselines show that a model with lower relation extraction accuracy could have higher explanation quality. We proposed methods to improve both the accuracy and explainability. Our proposed methods are based on changing the representation of the sentences and learning from distractor to teach the model to ignore irrelevant information in a bag. Our evaluation on the widely used FBNYT dataset show the effectiveness of our method in achieving state-of-the art performance in both accuracy and explanation quality. References Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question answering models. EMNLP. Gabor Angeli, Julie Tibshirani, Jean Wu, and Christopher D Manning. 2014. Combining distant and partial supervision for relation extraction. EMNLP. 6493 Fan Bai and Alan Ritter. 2019. Structured minimally supervised learning for neural relation extraction. NAACL. Yujia Bao, Shiyu Chang, Mo Yu, and Regina Barzilay. 2018. Deriving machine attention from human rationales. EMNLP. Iz Beltagy, Kyle Lo, and Waleed Ammar. 2019. Combining distant and direct supervision for neural relation extraction. NAACL. Jinhua Du, Jingguang Han, Andy Way, and Dadong Wan. 2018. Multi-level structured self-attentions for distantly supervised relation extraction. EMNLP. Reza Ghaeini, Xiaoli Z. Fern, Hamed Shahbazi, and Prasad Tadepalli. 2019. Saliency learning: Teaching the model where to pay attention. NAACL. Reza Ghaeini, Xiaoli Z. Fern, and Prasad Tadepalli. 2018. Interpreting recurrent and attention-based neural models: a case study on natural language inference. EMNLP. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. NAACL. Sarthak Jain, , and Byron C. Wallace. 2019. Attention is not explanation. NAACL. Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2017. 
Distant supervision for relation extraction with sentence-level attention and entity descriptions. AAAI. Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. EMNLP. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Su. 2016. Neural relation extraction with selective attention over instances. ACL. Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. AAAI. Tian Yu Liu, Kexiang Wang, Baobao Chang, and Zhifang Sui. 2017. A soft-label method for noise tolerant distantly supervised relation extraction. EMNLP. Tianyi Liu, Xinsong Zhang, Wanhao Zhou, and Weijia Jia. 2018. Neural relation extraction via innersentence noise reduction and transfer learning. EMNLP. Bingfeng Luo, Yansong Feng, Zheng Wang, Zhanxing Zhu, Songfang Huang, Rui Yan, and Dongyan Zhao. 2017. Learning with noise: Enhance distantly supervised relation extraction with dynamic transition matrix. ACL, arXiv:1503.06733. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. NeurIPS. Xu Peng and Barbosa Denilson. 2019. Connecting language and knowledge with heterogeneous representations for neural relation extraction. NAACL. Maria Pershina, Bonan Min, Wei Xu, and Ralph Grishman. 2014. Infusion of labeled data into distant supervision for relation extraction. ACL. Sebastian Riedel, Limin Yao, , and Andrew D McCallum. 2010. Modeling relations and their mentions without labeled text. ECML/PKDD. Andrew Slavin Ross, Michael C. Hughes, and Finale. 2017. Right for the right reasons: Training differentiable models by constraining their explanations. IJCAI. Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2019. Grad-cam: Visual explanations from deep networks via gradient-based localization. International Journal of Computer Vision. Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. 2016. Not just a black box: Learning important features through propagating activation differences. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2012. Deep inside convolutional networks: Visualising image classification models and saliency maps. CoRR, abs/1312.6034. Shikhar Vashishth, Rishabh Joshi, Sai Suman Prayaga, Chiranjib Bhattacharyya, and Partha Talukdar. 2018. Reside: Improving distantly-supervised neural relation extraction using side information. EMNLP. Guanying Wang, Wen Zhang, Ruoxu Wang, Yalin Zhou, Xi Chen, Wei Zhang, Hai Zhu, and Huajun Chen. 2018. Label-free distant supervision for relation extraction via knowledge graph embedding. EMNLP. Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. NAACL. Daojian Zeng, Joel R. Tetreault, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. EMNLP. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. Ernie: Enhanced language representation with informative entities. ACL. 6494 A Supplemental Material A.1 Saliency and (Gradient × input) Assume that a neural model outputs a logit score o which is a differentiable function and parameterized by x ∈Rd, θ and etc. The Taylor series of the given function o near input a is given by: o(x) = o(a)+ ∂o ∂x(a)(x−a)+ 1 2! ∂o2 ∂x2 (a)(x−a)2+. . . 
(9)

Approximating the function o as a linear function, the first-order approximation of the Taylor series is given by:

o(x) \approx \frac{\partial o}{\partial x}(a)\, x + b    (10)

Note that \frac{\partial o}{\partial x}(a) \in R^d. Therefore, for each dimension i, the larger \frac{\partial o}{\partial x}(a)[i] is, the greater the (positive or negative) impact of a[i] on o. The overall impact of a on o is given by \sum_i \frac{\partial o}{\partial x}(a)[i] or its absolute value \sum_i \big|\frac{\partial o}{\partial x}(a)[i]\big|. Regarding our task, the logit score of the model for a relation k is o_k. For a given sentence x_n, the amount of positive or negative impact of x_n on o_k is approximated by \sum_i \big|\frac{\partial o_k}{\partial x}(x_n)[i]\big|, which is the saliency. The (Gradient × input) for a given sentence x_n is equivalent to the linear approximation of o_k at x_n, which is \sum_i x_n[i] \times \frac{\partial o_k}{\partial x}(x_n)[i].

A.2 Ground-truth explanation set

We annotate the positive bags of the test split of FB-NYT with ground-truth explanations. There are 1,950 bags and 6,444 sentences. For each (sentence, relation) pair in a bag, the sentence is either a rationale (supportive) for the relation or irrelevant. For example:

entity pair: (namibia, windhoek)
relation: /location/country/capital
rationale: "the magistrate also continued mr. alexander's bail conditions, including a bond of 10 million namibian dollars about 1.4 million and restrictions on his movements to the magisterial district of windhoek, namibia's capital"
irrelevant: "mr. alexander also placed full page ads in local newspapers proclaiming his commitment to investing in namibia, and has mounted a large billboard conveying the same message opposite government park in windhoek"

Following the annotation of the sentence-relation contributions as either rationale or irrelevant, we extract a set "expl-eval" (which is used to evaluate the explanation quality of the models) as follows:

expl-eval = set()
For each (bag-id, bag):
    For each relation label k given to the bag:
        For each pair of a rationale s+ and an irrelevant s- for k:
            expl-eval.add((bag-id, k, s+, s-))

The size of the generated expl-eval is 1,097 tuples of (bag-id, k, rationale sentence, irrelevant sentence). Please note that the relation label k is one of the ground-truth labels assigned to bag-id.
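To make the attribution measures of Section 4 and Appendix A.1 concrete, the following is a minimal PyTorch-style sketch of how saliency, gradient × input, and leave-one-out could be computed for the sentence encodings of a single bag. This is not the authors' implementation: the model interface (score_bag) and the tensor layout are hypothetical, and batching, padding, and bag construction are assumed to happen elsewhere.

import torch

def attribution_scores(model, bag_encodings, relation_k):
    # bag_encodings: (num_sentences, dim) sentence encodings x_n of one bag (hypothetical layout)
    x = bag_encodings.clone().detach().requires_grad_(True)
    logits = model.score_bag(x)                 # (num_relations,); hypothetical method
    o_k = logits[relation_k]
    # Gradient of the relation-k logit w.r.t. every sentence encoding.
    grads = torch.autograd.grad(o_k, x)[0]      # (num_sentences, dim)
    saliency = grads.abs().sum(dim=1)           # S_{x_n,k}: L1 norm of the gradient per sentence
    grad_x_input = (grads * x).sum(dim=1)       # GI_{x_n,k}: sum_i x_n[i] * d o_k / d x_n[i]
    return saliency, grad_x_input

def leave_one_out(model, bag_encodings, relation_k):
    # loo_{x_n,k} = o_k - o_{k,-n}: drop of the relation-k logit when sentence n is removed.
    with torch.no_grad():
        o_k = model.score_bag(bag_encodings)[relation_k]
        scores = []
        for n in range(bag_encodings.size(0)):
            reduced = torch.cat([bag_encodings[:n], bag_encodings[n + 1:]], dim=0)
            scores.append(o_k - model.score_bag(reduced)[relation_k])
    return torch.stack(scores)

In this sketch, the per-sentence scores returned by these functions would then be ranked and compared against the expl-eval ordering via Kendall Tau, as described in Section 6.3.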
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 619–624 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 619 Learning to Tag OOV Tokens by Integrating Contextual Representation and Background Knowledge Keqing He, Yuanmeng Yan, Weiran Xu ∗ Pattern Recognition & Intelligent System Laboratory School of Information and Communication Engineering Beijing University of Posts and Telecommunications, Beijing, China {kqin,yanyuanmeng,xuweiran}@bupt.edu.cn Abstract Neural-based context-aware models for slot tagging have achieved state-of-the-art performance. However, the presence of OOV(outof-vocab) words significantly degrades the performance of neural-based models, especially in a few-shot scenario. In this paper, we propose a novel knowledge-enhanced slot tagging model to integrate contextual representation of input text and the large-scale lexical background knowledge. Besides, we use multilevel graph attention to explicitly model lexical relations. The experiments show that our proposed knowledge integration mechanism achieves consistent improvements across settings with different sizes of training data on two public benchmark datasets. 1 Introduction Slot tagging is a critical component of spoken language understanding(SLU) in dialogue systems. It aims at parsing semantic concepts from user utterances. For instance, given the utterance ”I’d also like to have lunch during my flight” from the ATIS dataset, a slot tagging model might identify lunch as a meal description type. Given sufficient training data, recent neural-based models (Mesnil et al., 2014; Liu and Lane, 2015, 2016; Goo et al., 2018; Haihong et al., 2019; He et al., 2020) have achieved remarkably good results. However, these works often suffer from poor slot tagging accuracy when rare words or OOV( out-of-vocab) words exist. (Ray et al., 2018) has verified the presence of OOV words further degrades the performance of neural-based models, especially in a few-shot scenario where training data can not provide adequate contextual semantics. Previous context-aware models merely focus on how to capture deep contextual semantics to aid ∗Weiran Xu is the corresponding author. playlist broadcast music genre classical music popular jazz scat singing hyponyms sister term train set can you append some classical music to my playlist O O O O B-music_type I-music_type O O O find and add some scat singing to my broadcast O O O O O O O test set semantic synset O O (context-aware model) B-music_type I-music_type (knowledge integration) Figure 1: An example of slot tagging in the few-shot scenario where scat singing is unseen in the training set. The prior context-aware model fails to recognize its correct type because of low-coverage contextual information. After integrating background knowledge from WordNet, it succeeds to reason the correct type via lexical relations. in recognizing slot entities, while neglecting ontology behind the words or large-scale background knowledge. Explicit lexical relations are vital to recognizing unseen words when there is not adequate training data, that is, few-shot scenarios. Fig 1 gives a motivating example of slot tagging to explain the phenomenon. This example suggests slot tagging requires not only understanding the complex linguistic context constraints but also reasoning explicit lexical relations via large-scale background knowledge graphs. 
Previous state-of-the-art context-aware models (Goo et al., 2018; Haihong et al., 2019) only learn contextual information based on a multi-layer BiLSTM encoder and self-attention layer. (Dugas and Nichols, 2016; Williams, 2019; Shah et al., 2019) use handcrafted lexicons (also known as gazettes or dictionaries), which are typically collections of phrases semantically related, to improve slot tagging. One major limitation is that lexicons collected by domain experts are relatively small on the scale and fail to model complicated relations 620 between words, such as relation hierarchy. In this paper, we propose a novel knowledgeenhanced method for slot tagging by integrating contextual representation of input text and the largescale lexical background knowledge, enabling the model to reason explicit lexical relations. We aim to leverage both linguistic regularities covered by deep LMs and high-quality knowledge derived from curated KBs. Consequently, our model could infer rare and unseen words in the test dataset by incorporating contextual semantics learned from the training dataset and lexical relations from ontology. As depicted in Fig 2, given an input sequence, we first retrieve potentially relevant KB entities and encode them into distributed representations that describe global graph-structured information. Then we employ a BERT (Devlin et al., 2019) encoder layer to capture context-aware representations of the sequence and attend to the KB embeddings using multi-level graph attention. Finally, we integrate BERT embeddings and the desired KB embeddings to predict the slot type. Our main contributions are three-fold: (1) We investigate and demonstrate the feasibility of applying lexical ontology to facilitate recognizing OOV words in the few-shot scenario. To the best of our knowledge, this is the first to consider the large-scale background knowledge for enhancing context-aware slot tagging models. (2) We propose a knowledge integration mechanism and use multi-level graph attention to model explicit lexical relations. (3) Plenty of experiments on two benchmark datasets show that our proposed method achieves consistently better performance than various state-of-theart context-aware methods. 2 Our Approach In this work, we consider the slot tagging task in the few-shot scenario, especially for OOV tokens. Given a sequence with n tokens X = {xi}n i=1, our goal is to predict a corresponding tagging sequence Y = {yi}n i=1. This section first explains our BERT-based model and then introduces the proposed knowledge integration mechanism for inducing background commonsense. The overall model architecture is illustrated in Fig 2. 2.1 BERT-Based Model for Slot Tagging The model architecture of BERT is a multi-layer bidirectional Transformer encoder. The input representation is a concatenation of WordPiece emKnowledge Integration Layer x1 x2 xn h1 h2 hn y1 y2 yn hi c1 c2 cm sentinel C1(xi) C2(xi) fi ... … … BiLSTM Matching Layer CRF Layer Figure 2: The overall architecture of the proposed slot tagging model. beddings (Wu et al., 2016), positional embeddings, and the segment embeddings. Inspired by previous RNN-based works (Mesnil et al., 2014; Liu and Lane, 2016), we extend BERT to a slot tagging model. We first feed the input sequence X = {xi}n i=1 to a pre-trained BERT encoding layer and then get final hidden states H = (h1, ..., hn). 
To make this procedure compatible with the original BERT tokenization, we feed each input word into the WordPiece tokenizer and use the hidden state corresponding to the first sub-word as input to the softmax classifier:

y_i = \mathrm{softmax}(W h_i + b), \quad i \in 1 \ldots n    (1)

where h_i \in R^{d_1} is the hidden state corresponding to the first sub-word of the i-th input word x_i and y_i is the slot label.

Table 1: Statistics of the ATIS and Snips datasets.
                            ATIS     Snips
Vocabulary size             722      11,241
Percentage of OOV words     0.77%    5.95%
Number of slots             120      72
Training set size           4,478    13,084
Development set size        500      700
Testing set size            893      700

2.2 Knowledge Integration Mechanism

The knowledge integration mechanism aims at enhancing the deep contextual representation of the input text by leveraging the large-scale lexical background knowledge base, WordNet (Miller, 1995), to recognize tokens unseen in the training set. Essentially, it applies multi-level graph attention over KB embeddings, conditioned on the BERT representations from the previous layer, to enhance the contextual BERT embeddings with human-curated background knowledge. We first introduce the KB embedding and retrieval process. In this paper, we use the lexical KB WordNet, stored as (subject, relation, object) triples, where each triple indicates a specific relation between word synsets, e.g., (state, hypernym-of, california). Each synset expresses a distinct concept, organized in a human-curated tree hierarchy.

KB Embeddings. We represent KB concepts as continuous vectors in this paper. The goal is that the KB tuples (s, r, o) can be measured in the dense vector space based on the embeddings. We adopt the BILINEAR model (Yang et al., 2014), which measures the relevance via a bilinear function f(s, r, o) = s^T M_r o, where s, o \in R^{d_2} are the vector embeddings for s and o respectively and M_r is a relation-specific embedding matrix. We then train the embeddings using the max-margin ranking objective:

\sum_{q=(s,r,o) \in T} \; \sum_{q'=(s,r,o') \in T'} \max\big(0,\; 1 - S_q + S_{q'}\big)    (2)

where T denotes the set of triples in the KB and T' denotes the negative triples that are not observed in the KB. We thereby obtain vector representations for the concepts of the KB. We mainly focus on the slot tagging task, the datasets are relatively small for jointly learning KB embeddings, and the KB contains many triples not present in the ATIS and Snips datasets; therefore, we pre-train the KB vectors and keep them fixed while training the whole model to reduce complexity.

KB Concepts Retrieval. We need to retrieve from the KB all the concepts or synsets relevant to the input word x_i. Different from (Yang and Mitchell, 2017; Yang et al., 2019), for a word x_i, we first return its synsets as the first-level candidate set C_1(x_i) of KB concepts. We then construct the second-level candidate set C_2(x_i) by retrieving all the direct hyponyms of each synset in C_1(x_i), as shown in the right part of Fig 2.

Multi-Level Graph Attention. After obtaining the two-level concept candidate sets, we apply the BERT embedding h_i of input token x_i to attend over the multi-level memory. The first-level attention, α, is calculated by a bilinear operation between h_i and each synset c_j in the first-level set C_1(x_i):

\alpha_{ij} \propto \exp(c_j^T W_1 h_i)    (3)

Then we add an additional sentinel vector c (Yang and Mitchell, 2017) and accumulate all the embeddings as follows:

s^1_i = \sum_j \alpha_{ij} c_j + \gamma_i c    (4)

where \gamma_i is computed similarly to \alpha_{ij} and \sum_j \alpha_{ij} + \gamma_i = 1.
Here s1 i is regarded as a one-hop knowledge state vector for it only represents its directly linked synsets. Therefore, we perform the second-level graph attention to encode the hyponyms of its direct synsets to enrich the information of original synsets. Intuitively the second-level attention over the hyponyms can be viewed as a relational reasoning process. Because once a synset belongs to an entity type, its hyponyms always conform to the same type. Likewise, the second-level attention over C2(xi) is calculated: βijk ∝exp(cT jkW2hi) (5) where cj is the j-th synset linked to token xi and cjk the k-th hyponym of cj. So we can obtain the multi-hop knowledge state vector s2 i : s2 i = X j X k αijβijkcjk (6) Then we concat multi-level knowledge-aware vector s1 i , s2 i , and original BERT representation hi, and output fi = [s1 i , s2 i , hi]. We also add a BiLSTM matching layer which takes as input the knowledge-enriched representations fi. Then we forward the hidden states to a CRF layer and predict the final results. The training objective is the sum of log-likelihood of all the words. 3 Experiments 3.1 Setup Datasets To evaluate our approach, we conduct experiments on two public benchmark datasets, ATIS (T¨ur et al., 2010) and Snips (Coucke et al., 2018). ATIS contains 4,478 utterances in the training set and 893 utterances in the test set, while Snips contains 13,084 and 700 utterances, respectively. The percentage of OOV words between the training and test datasets is 0.77%(ATIS) and 5.95%(Snips). 622 Model ATIS Snips 1% 2% 5% 10% 50% 100% 1% 2% 5% 10% 50% 100% Attention-Based 3.59 22.91 48.16 63.33 88.51 94.21 20.94 30.58 43.74 50.92 78.46 87.80 Slot-Gated Full 4.91 20.08 53.01 77.07 94.19 94.80 18.24 25.03 51.91 64.51 84.45 88.88 Slot-Gated Intent 3.45 18.81 55.64 79.59 94.53 95.20 22.88 30.71 57.94 69.43 83.80 88.30 SF-ID Network 6.18 18.89 63.96 83.35 94.34 95.80 19.25 31.50 55.87 69.65 86.01 92.23 RNN 5.86 21.27 62.53 80.59 94.42 95.17 19.92 25.91 56.30 65.88 88.65 89.30 RNN+KB 6.75 23.35 63.55 81.40 95.04 95.63 23.64 28.92 58.88 68.22 90.40 90.81 BERT 73.67 80.84 88.09 91.06 95.08 95.98 69.49 76.87 86.34 90.01 94.26 95.17 BERT+KB 74.71 81.70 88.81 91.55 95.39 96.25 71.50 78.65 87.84 91.24 95.43 95.89 Table 2: Slot tagging performance on ATIS and Snips datasets. % represents how much training data we randomly choose from the original training set. We report the F1 scores on the same test sets. Samples in Snips are from different topics, such as getting weather and booking a restaurant, resulting in a larger vocabulary. By contrast, samples in ATIS are all about flight information with similar vocabularies across them. Therefore, Snips is much more complicated, mainly due to data diversity and the large vocabulary. The full statistics are shown in the Table 1. To simulate the few-shot scenarios, we downsample the original training sets of ATIS and Snips to different extents while keeping valid and test sets fixed. We aim to evaluate the effectiveness of integrating external KB under the settings of varied sizes of training data available. Evaluation We evaluate the performance of slot tagging using the F1 score metric. In the experiments, we use the English uncased BERT-base model, which has 12 layers, 768 hidden states, and 12 heads. The hidden size for the BiLSTM layer is set to 128. Adam (Kingma and Ba, 2014) is used for optimization with an initial learning rate of 1e-5. The dropout probability is 0.1, and the batch size is 64. We finetune all hyperparameters on the valid set. 
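As a concrete reference for the knowledge integration layer of Section 2.2 (Eqs. 3-6), the following is a minimal PyTorch-style sketch, not the authors' implementation. It assumes the synset and hyponym embeddings for a token have already been retrieved and padded elsewhere, normalizes the second-level attention with a softmax over each synset's hyponyms (one reasonable reading of Eq. 5), and uses illustrative module and variable names throughout.

import torch
import torch.nn as nn

class KnowledgeIntegration(nn.Module):
    # Two-level graph attention over retrieved WordNet concepts, with a knowledge sentinel.
    def __init__(self, bert_dim, kb_dim):
        super().__init__()
        self.W1 = nn.Linear(bert_dim, kb_dim, bias=False)  # first-level bilinear map (Eq. 3)
        self.W2 = nn.Linear(bert_dim, kb_dim, bias=False)  # second-level bilinear map (Eq. 5)
        self.sentinel = nn.Parameter(torch.randn(kb_dim))  # sentinel vector c

    def forward(self, h, syn, hyp):
        # h:   (bert_dim,)      BERT embedding of token x_i
        # syn: (J, kb_dim)      embeddings of the first-level synsets C1(x_i)
        # hyp: (J, K, kb_dim)   embeddings of their direct hyponyms C2(x_i)
        scores = syn @ self.W1(h)                                              # (J,)
        weights = torch.softmax(
            torch.cat([scores, (self.sentinel @ self.W1(h)).view(1)]), dim=0)
        alpha, gamma = weights[:-1], weights[-1]                               # sum_j alpha_ij + gamma_i = 1
        s1 = (alpha.unsqueeze(1) * syn).sum(dim=0) + gamma * self.sentinel     # Eq. 4
        beta = torch.softmax(hyp @ self.W2(h), dim=1)                          # (J, K), Eq. 5
        s2 = (alpha.unsqueeze(1) * (beta.unsqueeze(2) * hyp).sum(dim=1)).sum(dim=0)  # Eq. 6
        return torch.cat([s1, s2, h])                                          # f_i = [s1_i, s2_i, h_i]

A full model would compute f_i for every token, feed the sequence of f_i vectors into the BiLSTM matching layer, and decode with the CRF layer, as described above.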
3.2 Baselines

Attention-Based (Liu and Lane, 2016) uses an RNN layer and a self-attention layer to encode the input text. Slot-Gated (Goo et al., 2018), which has two variants, Full Atten and Intent Atten, applies information from the intent detection task to enhance slot tagging. SF-ID Network (Haihong et al., 2019) designs a multiple-iteration mechanism to construct bi-directional interrelated connections between slot tagging and intent detection. Most of these previous methods improve the performance of slot tagging by joint learning with intent detection. However, the effectiveness of background knowledge for slot tagging is still unexplored. Consequently, our proposed approach integrates the large-scale lexical background knowledge base, WordNet, to enhance the deep contextual representation of the input text. We hope to further improve the performance of slot tagging, especially in the few-shot scenario where plenty of training data is not available.1

1 We do not choose (Williams, 2019) as a baseline since it only reports experiments on private industrial datasets and is not open-sourced; we can hardly figure out the details of manually collecting lexicons from the dataset.

3.3 Overall Results

We display the experiment results in Table 2, where we choose two model architectures, RNN and BERT, as the encoding layer. Table 2 shows that our proposed knowledge integration mechanism significantly outperforms the baselines on both datasets, demonstrating that explicitly integrating large-scale background knowledge with contextual representation can benefit slot tagging effectively. Moreover, the improvement of 0.72% over the strong BERT baseline on Snips is considerably higher than the 0.27% on ATIS. Considering the distinct complexity of the two datasets, the probable reason is that a simpler slot tagging task, such as ATIS, does not require much background knowledge to achieve good results. Because the vocabulary of ATIS is much smaller than that of Snips, context-aware models are capable of providing enough cues for recognizing rare or OOV words. Hence, our method makes a notable difference in scenarios where samples are linguistically diverse and the vocabulary is large. The results also demonstrate that incorporating external knowledge does not bring in much noise, since we use a knowledge sentinel for a better tradeoff between the impact of background knowledge and information from the context.

[Figure 3: Relative F1 improvement (%) over the BERT baseline for training set sizes 0.01, 0.02, 0.05, 0.10, 0.50, 1.00. ATIS: 1.41, 1.06, 0.82, 0.54, 0.33, 0.28; Snips: 2.89, 2.32, 1.74, 1.37, 1.24, 0.76.]

On the other hand, the main results of the RNN-based models are 95.17 (+0.46) on ATIS and 89.30 (+1.51) on Snips, where the scores in brackets are the absolute improvements brought by the KB. Compared to the BERT-based models, at 95.98 (+0.27) on ATIS and 95.17 (+0.72) on Snips, the RNN-based models achieve more significant improvements from the KB than the BERT-based models. We believe BERT can effectively transfer prior linguistic context constraints, so background knowledge benefits RNN-based models more. BERT does improve the model's ability to solve the OOV problem, since it has learned linguistic knowledge from a large corpus. However, our method focuses more on the effect of using human-curated structured background knowledge and further enhances BERT in a distinct way.
4 Qualitative Analysis 4.1 Effect of Training Data Size Fig 3 shows the relative improvement percentages on ATIS and Snips using different sizes of training data. Results substantiate knowledge integration better facilitates few-shot slot tagging. This is because traditional context-aware models can not learn enough contextual semantics well while only given several samples. Explicit lexical relations become essentially necessary when there is not adequate training data, especially for rare words or OOV words. Background KB enables the model to reason explicit lexical relations and helps recognize rare and unseen words. Meanwhile, incorporating background knowledge can also enhance the original representation of BERT, which can provide direct lexical relations. Model ATIS Snips Full Model 91.55 91.24 - w/o knowledge integration 91.20 90.22 - w/o the second-level graph attention 91.46 90.87 - w/o matching layer 91.42 91.05 - w/o CRF 91.38 90.96 Table 3: Ablation analysis under the 10% training data setting. 4.2 Ablation Study To study the effect of each component of our method, we conduct ablation analysis under the 10% training data setting (Table 3). We can see that knowledge integration is crucial to the improvements. Besides, the first-level graph attention acquires better performance gain than the secondlevel attention. We assume that directly linked synsets are more significant than the hyponyms. The matching layer and CRF also play a role. The reason why the RNN matching layer matters is partly to build explicit interactions between knowledge vectors and context vectors. 5 Conclusion We present a novel knowledge integration mechanism of incorporating background KB and deep contextual representations to facilitate the few-shot slot tagging task. Experiments confirm the effectiveness of modeling explicit lexical relations, which has not yet been explored by previous works. Moreover, we find that our method delivers more benefits to data scarcity scenarios. We hope to provide new guidance for the future slot tagging work. Acknowledgments The authors would like to thank the reviewers for their valuable comments. This work was partially supported by National Key R&D Program of China No. 2019YFF0303300 and Subject II No. 2019YFF0303302, DOCOMO Beijing Communications Laboratories Co., Ltd, MoE-CMCC ”Artifical Intelligence” Project No. MCM20190701. References Alice Coucke, Alaa Saade, Adrien Ball, Th´eodore Bluche, Alexandre Caulier, David Leroy, Cl´ement Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding 624 system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT. Fabrice Dugas and Eric Nichols. 2016. Deepnnner: Applying blstm-cnns and extended lexicons to named entity recognition in tweets. In Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT), pages 178–187. Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and YunNung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 753–757. E Haihong, Peiqing Niu, Zhongfu Chen, and Meina Song. 2019. 
A novel bi-directional interrelated model for joint intent detection and slot filling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5467– 5471. Keqing He, Weiran Xu, and Yuanmeng Yan. 2020. Multi-level cross-lingual transfer learning with language shared and specific knowledge for spoken language understanding. IEEE Access, 8:29407– 29416. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Bing Liu and Ian Lane. 2015. Recurrent neural network structured output prediction for spoken language understanding. In Proc. NIPS Workshop on Machine Learning for Spoken Language Understanding and Interactions. Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling. Interspeech 2016. Gr´egoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, et al. 2014. Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(3):530–539. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39– 41. Avik Ray, Yilin Shen, and Hongxia Jin. 2018. Robust spoken language understanding via paraphrasing. Interspeech 2018. Darsh Shah, Raghav Gupta, Amir Fayazi, and Dilek Hakkani-Tur. 2019. Robust zero-shot cross-domain slot filling with example values. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. G¨okhan T¨ur, Dilek Z. Hakkani-T¨ur, and Larry Heck. 2010. What is left to be understood in atis? 2010 IEEE Spoken Language Technology Workshop, pages 19–24. Kyle Williams. 2019. Neural lexicons for slot tagging in spoken language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 83–89. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. An Yang, Quan Wang, Jing Liu, Kai Liu, Yajuan Lyu, Hua Wu, Qiaoqiao She, and Sujian Li. 2019. Enhancing pre-trained language representations with rich knowledge for machine reading comprehension. In ACL. Bishan Yang and Tom M. Mitchell. 2017. Leveraging knowledge bases in lstms for improving machine reading. In ACL. Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6495–6504 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6495 Representation Learning for Information Extraction from Form-like Documents Bodhisattwa Prasad Majumder†♣Navneet Potti♠Sandeep Tata♠ James B. Wendt♠Qi Zhao♠Marc Najork♠ ♣Department of Computer Science and Engineering, UC San Diego [email protected] ♠Google Research, Mountain View {navsan, tata, jwendt, zhaqi, najork}@google.com Abstract We propose a novel approach using representation learning for tackling the problem of extracting structured information from form-like document images. We propose an extraction system that uses knowledge of the types of the target fields to generate extraction candidates, and a neural network architecture that learns a dense representation of each candidate based on neighboring words in the document. These learned representations are not only useful in solving the extraction task for unseen document templates from two different domains, but are also interpretable, as we show using loss cases. 1 Introduction In this paper, we present a novel approach to the task of extracting structured information from formlike documents using a learned representation of an extraction candidate. Form-like documents like invoices, purchase orders, tax forms and insurance quotes are common in day-to-day business workflows, but current techniques for processing them largely still employ either manual effort or brittle and error-prone heuristics for extraction. The research question motivating our work is the following: given a target set of fields for a particular domain – e.g., due date and total amount for invoices – along with a small set of manually-labeled examples, can we learn to extract these fields from unseen documents? Take, for instance, the domain of invoices, a document type that large enterprises often receive and process thousands of times every week (iPayables, 2016). Invoices from different vendors often present the same types of information but with different layouts and positioning. Figure 1 shows the headers of invoices from a few different vendors †Work done during an internship at Google Research Figure 1: Excerpts from sample invoices from different vendors. Instances of the invoice_date field are highlighted in green. showing the invoice date (highlighted in green) and number in different layouts. Furthermore, invoices from the same supplier even share similar presentation and differ only in specific values. We refer to this unit of visual pattern that is similar across a collection of documents as a template, and the fields of information that are common across templates in a domain as the schema. The schema consists of fields like invoice_date and total_amount, each associated with a type like date and currency. Extracting values for these fields from a given document, particularly one belonging to an unseen template, is a challenging problem for many reasons. In contrast to most prior work on information extraction (Sarawagi, 2008), templatic documents do not contain much prose. Approaches that work well on natural text organized in sentences cannot be applied directly to such documents where spatial layout elements like tables and grid formatting are commonplace. Understanding spatial relationships is critical for achieving good extraction performance on such documents. 
Moreover, these documents are usually in PDF or scanned image formats, so these presentation hints are not explicitly available in a markup language. Techniques that are successful on HTML documents such as 6496 web pages, including traditional wrapper induction approaches (Dalvi et al., 2011), are therefore not immediately applicable. Recently, there has been a surge in research interest in solving this extraction task adapting techniques in natural language processing (Liu et al., 2019), computer vision (Davis et al., 2019), or combinations thereof (Katti et al., 2018). In contrast to this body of work, we propose an approach based on representation learning for this task. We first generate extraction candidates for each target field using its associated type (e.g., all dates as candidates for invoice_date). We then use a neural network model to learn a dense representation for each extraction candidate independent of the field to which it belongs. We also learn a separate representation for the field itself, and use the similarity between the candidate and field representations to score the candidate according to how likely it is to be the true extraction value for that field. The design of our extraction system rests on a few observations about how information is often laid out in form-like documents (see Section 2). An advantage of our representation learning approach is that it allows us to encode certain priors we developed based on these observations into the architecture of the neural network and its input features (see Section 4). In fact, our experiments show that our proposed neural architecture outperforms a more naive MLP baseline using the same input features by about 10 F1 points on the extraction task for two different domains (see Section 6). Furthermore, the learned candidate representations are also meaningful and lend themselves to interpretation, as we show by delving into some loss cases. 2 Observations about Forms We make three key observations about form-like documents that inform our design. Observation 1 Each field often corresponds to a well-understood type. For example, the only likely extraction candidates for the invoice_date field in an invoice are instances of dates. A currency amount like $25.00 would clearly be incorrect. Since there are orders of magnitude fewer dates on an invoice as there are text tokens, limiting the search space by type dramatically simplifies the problem. Consequently, we use a library of detectors for several common types such as dates, currency amounts, integers, address portals, emails addresses, etc. to generate candidates. Observation 2 Each field instance is usually associated with a key phrase that bears an apparent visual relationship with it. Consider the invoice excerpt in Figure 1(c). It contains two date instances, only one of which is the true invoice_date, as indicated by the word “Date” next to it. Similarly, in the bottom-right invoice excerpt, we are easily able to distinguish between the invoice number (indicated by “Invoice #”) and the purchase order number (indicated by “PO #”). We call such indicative words key phrases. Proximity is not the only criterion that defines a key phrase. For instance, the word “Date” is not the nearest one to the true invoice_date instance in Figure 1(c); the document number in the line above and the page number below are clearly closer. 
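To illustrate the type-based candidate generation mentioned in Observation 1, here is a small self-contained Python sketch covering just two types. It is a simplified stand-in rather than the system's actual mechanism (which relies on a cloud entity-extraction service and a larger library of type detectors), and the class and pattern names are illustrative.

import re
from dataclasses import dataclass

@dataclass
class Candidate:
    field_type: str
    text: str
    start: int   # character offset in the OCR text
    end: int

# Simplified patterns; a production system would use far more robust type detectors.
TYPE_PATTERNS = {
    "date": re.compile(
        r"\b(\d{1,2}[/-]\d{1,2}[/-]\d{2,4}|"
        r"(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.? \d{1,2},? \d{4})\b"),
    "currency": re.compile(r"[$€£]\s?\d[\d,]*(?:\.\d{2})?"),
}

def generate_candidates(ocr_text: str):
    # Every span matching a type becomes a candidate for every field of that type,
    # e.g., each detected date is a candidate for invoice_date, due_date and delivery_date.
    candidates = []
    for field_type, pattern in TYPE_PATTERNS.items():
        for m in pattern.finditer(ocr_text):
            candidates.append(Candidate(field_type, m.group(0), m.start(), m.end()))
    return candidates

print(generate_candidates("Invoice Date: 01/15/2018  Total Due: $1,234.50"))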
It is also not the case that the key phrase always occurs on the same line; Figure 1(a) shows a case where the key phrase “DATE” occurs just above the true invoice_date. An effective solution needs to combine the spatial information along with the textual information. Fortunately, in our experience, these spatial relationships exhibit only a small number of variations across templates, and these tend to generalize across fields and domains. Observation 3 Key phrases for a field are largely drawn from a small vocabulary of field-specific variants. In a corpus of invoices we collected, we observed that, as exemplified by the samples in Figure 1, about 93% of the nearly 8400 invoice date instances were associated with key phrases that included the words “date” or “dated” and about 30% included “invoice”. Only about 7% of invoice dates had neither of these words in their key phrases. Similarly, 87% of the nearly 2800 due_date instances in our corpus had key phrases that contained the word “due” and 81% contained “date”. We found similar patterns for all other fields we investigated. The fact that there are only a small number of field-specific key phrases suggests that this problem may be tractable with modest amounts of training data. While these observations are applicable to many fields across different document types, there are several exceptions which we plan to tackle in future work. 3 Extraction Pipeline We leveraged the observations laid out in Section 2 to build a system to solve the information extraction task for form-like documents. Given a document 6497 and a target schema, we generate extraction candidates for each field from the document text using the field type. We then score each candidate independently using a neural scoring model. Finally, we assign at most one scored candidate as an extraction result for each field. We discuss the stages of this pipeline here, and delve into the architecture of the scoring model in Section 4. 3.1 Ingestion Our system can ingest both native digital as well as scanned documents. We render each document to an image and use a cloud OCR service1 to extract all the text in it. The text in the OCR result is arranged in the form of a hierarchy with individual characters at the leaf level, and words, paragraphs and blocks respectively in higher levels. The nodes in each level of the hierarchy are associated with bounding boxes represented in the 2D Cartesian plane of the document page. The words in a paragraph are arranged in reading order, as are the paragraphs and blocks themselves. 3.2 Candidate Generation In Section 2, we made the observation that fields in our target schema correspond to well-understood types like dates, integers, currency amounts, addresses, etc. There are well-known techniques to detect instances of these types in text, ranging from regular expression matching and heuristics to sequence labeling using models trained on web data. We associate each field type supported by our system with one or more candidate generators. These generators use a cloud-based entity extraction service2 to detect spans of the OCR text extracted from the documents that are instances of the corresponding type. For example, every date in an invoice becomes a candidate for every date field in the target schema, viz. invoice_date, due_date and delivery_date. Since the recall of the overall extraction system cannot exceed that of the candidate generators, it is important that their recall be high. 
Precision is, however, largely the responsibility of the scorer and assigner. 3.3 Scoring and Assignment Given a set of candidates from a document for each field in the target schema, the crux of the extraction 1cloud.google.com/vision 2cloud.google.com/natural-language task is to identify the correct extraction candidate (if any) for each field. While there are many approaches one could take to solve this problem, we made the design choice to break it down to two steps: first, we compute a score ∈[0, 1] for each candidate independently using a neural model, then we assign to each field the scored candidate that is most likely to be the true extraction for it. This separation of scoring and assignment allows us to learn a representation for each candidate based only on its neighborhood, independently of other candidates and fields. It also frees us to encode arbitrarily complex business rules into the assigner if required, for example, that the due date for an invoice cannot (chronologically) precede its invoice date, or that the line item prices must sum up to the total. For brevity, we omit the details of the assignment module and report results using a simple assigner that chooses the highest-scoring candidate for each field independently of other fields. 4 Neural Scoring Model The scoring module takes as input the target field from the schema and the extraction candidate to produce a prediction score ∈[0, 1]. While the downstream assignement module consumes the scores directly, the scorer is trained and evaluated as a binary classifier. The target label for a candidate is determined by whether the candidate matches the ground truth for that document and field. An important desideratum for us in the design of the scorer is that it learns a meaningful candidate representation. We propose an architecture where the model learns separate embeddings for the candidate and the field it belongs to, and where the similarity between the candidate and field embeddings determines the score. We believe that such an architecture allows a single model to learn candidate representations that generalize across fields and document templates. We can conceptualize the learned representation of a candidate as encoding what words in its neighborhood form its associated key phrase since, apropos Observation 2, the spatial relationships between candidates and their key phrases are observed to generalize across fields. On the other hand, the embedding for a field can be conceptualized as encoding the key phrase variants that are usually indicative of it, apropos Observation 3. 6498 Figure 2: Neighbor ‘Invoice’ for invoice_date candidate with relative position (−0.06, −0.01). 4.1 Candidate features We would like our model to learn a representation of a candidate that captures its neighborhood. Accordingly, the essential features of a candidate are the text tokens that appear nearby, along with their positions. We use a simple heuristic to determine what OCR text tokens we consider to be the neighbors of a given candidate: we define a neighborhood zone around the candidate extending all the way to the left of the page and about 10% of the page height above it. Any text tokens whose bounding boxes overlap by more than half with the neighborhood zone is considered to be a neighbor. As shown in Figure 2, we represent the position of a candidate and each of its neighbors using the 2-D Cartesian coordinates of the centroids of their respective bounding boxes. 
These coordinates are normalized by dividing by the corresponding page dimensions so that the features are independent of the pixel resolution of the input documents. We calculate the relative position of a neighbor as the difference between its normalized 2-D coordinates and those of the candidate. An additional feature we found to be helpful is the absolute position of the candidate itself. An important design choice we made is to not incorporate the candidate text into the input. Note that this text was already the basis for generating the candidate in the first place. Withholding this information from the input to the model avoids accidental overfitting to our somewhat-small training datasets. For instance, since the invoices we collected were all dated prior to 2019, it is possible that providing the date itself as input to the model could cause it to learn that true invoice_date instances always occur prior to 2019. 4.2 Embeddings As shown in Figure 3 (a)-(d), we embed each of the candidate features separately in the following ways. Figure 3: Neural Scoring Model. Pos. = Positional, Cand. = Candidate, Embed. = Embedding The neighboring text tokens are embedded using a word embedding table. Each neighbor relative position is embedded through a nonlinear positional embedding consisting of two ReLU-activated layers with dropout. This nonlinear embedding allows the model to learn to resolve fine-grained differences in position, say between neighbors sharing the same line as the candidate and those on the line above. The candidate position feature is embedded using just a linear layer. We also use an embedding table for the field to which a candidate belongs. In a model with embedding dimension d, the sizes of each neighbor’s word and position embeddings are set to be d. We experimented with different sizes for the word and position embeddings, but it did not make a significant difference. For simplicity of exposition, we use the same value for both. Since each candidate is padded to have the same number of neighbors, say N, we denote the neighbor embeddings {h1, h2, . . . , hN}, with each hi ∈R2d. We also set the sizes of the candidate position embedding as well as the field embedding to be d. Neighbor Encodings It is important to note that the initial neighbor embeddings hi (Figure 3 (d)) are independent of each other. In order to capture interactions between neighbors, we employ self-attention (Vaswani et al., 2017), allowing each neighbor to have its embedding affected by all others. This is useful, for example, for the model to downweight a neighbor that has other neighbors between itself and the candidate. We pack the neighbor embeddings hi into a matrix H ∈RN×2d, then transform these em6499 bdeddings into query, key and value embeddings through three different linear projection matrices Wq, Wk and Wv ∈R2d×2d. qi = hiWq K = HWk V = HWv For each neighbor i, its query embedding qi and the key embeddings K are used to obtain the attention weight vector αi ∈RN as follows. αi = Softmax Ç qiKT √ 2d å The self-attended neighbor encoding ˜hi ∈R2d (see Figure 3(e)) for neighbor i is a linear combination of the value embeddings, V ∈RN×2d, using the above attention weights for all the neighbors ˜hi = αiV . As in Vaswani et al. (2017), we use a normalization constant of √ 2d to improve stability. We project the self-attended neighbor encodings to a larger 4 × 2d dimensional space using a linear projection with ReLU nonlinearity, and then project them back to 2d. 
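The neighbor self-attention and pooling just described, together with the similarity-based scoring previewed at the start of this section, can be sketched in a few lines of PyTorch. This is a minimal illustration rather than the authors' implementation: it uses single-head attention, omits dropout and masking of padded neighbors, and the projection layers and tensor shapes are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborEncoder(nn.Module):
    # Self-attention over the N neighbor embeddings of one candidate (Section 4.2).
    def __init__(self, d):
        super().__init__()
        self.wq = nn.Linear(2 * d, 2 * d, bias=False)
        self.wk = nn.Linear(2 * d, 2 * d, bias=False)
        self.wv = nn.Linear(2 * d, 2 * d, bias=False)
        # Project to the larger 4 x 2d space with a ReLU, then back down to 2d.
        self.ffn = nn.Sequential(nn.Linear(2 * d, 8 * d), nn.ReLU(), nn.Linear(8 * d, 2 * d))

    def forward(self, H):
        # H: (N, 2d) initial neighbor embeddings (word embedding || positional embedding)
        Q, K, V = self.wq(H), self.wk(H), self.wv(H)
        attn = torch.softmax(Q @ K.T / (H.size(-1) ** 0.5), dim=-1)   # normalize by sqrt(2d)
        return self.ffn(attn @ V)                                     # (N, 2d) neighbor encodings

def score_candidate(neighbor_encodings, cand_pos_emb, field_emb, candidate_proj):
    # Max-pool the neighborhood, concatenate the candidate position embedding, project to d,
    # then score by a rescaled cosine similarity with the field embedding.
    neighborhood = neighbor_encodings.max(dim=0).values               # (2d,)
    cand = candidate_proj(torch.cat([neighborhood, cand_pos_emb]))    # (d,); proj assumed to be a ReLU-activated Linear(3d, d)
    return 0.5 * (F.cosine_similarity(cand, field_emb, dim=0) + 1.0)  # score in [0, 1]

Training would then minimize binary cross entropy between this score and the label indicating whether the candidate matches the ground truth, as described in the next subsection.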
4.3 Candidate Encoding We combine the N neighbor encodings of size 2d each to form a single encoding of size 2d for the entire neighborhood. Since we already capture information about the relative positions of the neighbors with respect to the candidates in the embeddings themselves, it is important to ensure that the neighborhood encoding is invariant to the (arbitrary) order in which the neighbors are included in the features. Our experiments indicate that maxpooling the neighbor encodings together was the best strategy, slightly beating out mean-pooling. Next, we obtain a candidate encoding (see Figure 3(f, h, i)) by concatenating the neighborhood encoding ∈R2d with the candidate position embedding ∈Rd and projecting (through a ReLUactivated linear layer) back down to d dimensions. Candidate Scoring The candidate encoding is expected to contain all relevant information about the candidate, including its position and its neighborhood. By design, it is independent of the field to which said candidate belongs. This neural network is, however, trained as a binary classifier to score a candidate according to how likely it is to be the true extraction value for some field and document. Drawing inspiration from prior work in metric learning (Kulis, 2013), given a field with embedding f ∈Rd and its candidate with encoding c ∈ Corpus Split # Docs # Templates Invoices1 Train 11,390 11,390 Validation 2,847 2,847 Invoices2 Test 595 595 Receipts Train 237 141 Validation 71 47 Test 170 46 Table 1: Invoices and Receipts corpora Rd, we compute CosineSimilarity(c, f) ∈[−1, 1]. Finally, the model’s prediction is simply a (constant) linear rescaling of this similarity so that the scores lie in [0, 1]. The model is trained using binary cross entropy between this prediction and the target label as the loss function. Intuitively, this architecture ensures that the positive candidates for a field cluster together near its field embedding, and that these clusters are set far apart from each other. We use TSNE (Maaten and Hinton, 2008) to visualize this phenomenon in Section 6.2. 5 Datasets To analyze the performance of our model, we used datasets belonging to two different domains, summarized in Table 1. Invoices We collected two corpora of invoices from different sources. The first corpus, Invoices1, contains 14,237 single-page invoices. Each invoice was from a different vendor, so the documents do not share any common templates. Documents from the same vendor are generated from the same template. The second corpus, Invoices2, contains 595 documents belonging to different templates, with no templates in common with Invoices1. In all of our experiments, we used a 60-40 split of templates in Invoices1 as our training and validation sets, and all the templates in Invoices2 as our test set. We asked human annotators to provide us ground truth extraction results for the fields shown in Table 2. The candidate generator associated with each field type was used to generate examples, which were then labeled using the ground truth. About 95% of documents and fields present the training set had at least one positive example produced by our candidate generators. The fieldlevel recall of our candidate generators varies from about 87% for invoice_id to about 99% for invoice_date. Improving the recall of candidate generators is part of our ongoing effort. 6500 While the candidate generators have reasonably high recall, their precision varies dramatically from field to field. 
For common fields like invoice_date and total_amount that are present in nearly all documents, we generate fewer than ten negatives for each positive example. On the other hand, for rare fields like total_tax_amount as well as for fields with low-precision candidate generators such as the alphanum candidate generator for purchase_order, there can sometimes be dozens of negatives for each positive. Overall, since the negatives far outnumber the positives, we found it helpful to randomly downsample negatives in the training set to keep at most 40 negatives for each positive per field. The negatives in the validation and test sets were not downsampled. We created a vocabulary of the 512 most frequent tokens, case-normalized, taken from the OCR text of the documents in Invoices1. The vocabulary also includes special tokens for numbers ([NUMBER]), out-of-vocabulary tokens ([RARE]) and padding ([PAD]). Despite the small size of this vocabulary, it covered at least 95% of words that occurred in key phrases across the entire corpus where excluded words were usually OCR errors. Receipts We also evaluated our model using a publicly-available corpus of scanned receipts published as part of the ICDAR 2019 Robust Reading Challenge on Scanned Receipts OCR and Information Extraction3. This corpus contains 626 receipt images with ground truth extraction results for four fields, viz., address, company, date and total. Using the company annotation as the template mapping, we found that these documents belong to 234 templates. The largest template contains 46 receipts and about half the documents belong to 13 templates with more than 10 documents each. On the other hand, nearly 70% of templates only have a single document. In all of our experiments, we used a 60-20-20 split of templates as our training, validation and test sets respectively, sampling at most 5 documents from each template. Our target schema for this extraction task consists of the date and total fields. We generated labeled examples for these two fields using a vocabulary created as above from the 512 most frequent terms in the OCR text of the receipts. The fields in this dataset did not suffer from the label imbalance problem highlighted above for invoices. 3rrc.cvc.uab.es/?ch=13 6 Experiments In this section, we evaluate our scoring model with respect to our two key desiderata. First, in Section 6.1, we show that our model is able to help the extraction system generalize to unseen templates. Then, in Section 6.2, we probe the model to show that it learns meaningful internal representations. In the experiments described below, we trained models using the Rectified Adam (Liu et al., 2020) optimizer with a learning rate of 0.001 for 50 epochs. For both the Invoices and Receipts datasets described in Section 5, we used the training split to train the model, the validation split to pick the model with the best hold-out loss, and the test split to report performance metrics. 6.1 Generalization to unseen templates We measured the performance of our model’s scoring predictions using ROC AUC on the test split. We also analyzed its performance in the context of the overall extraction system using the accuracy of the end-to-end extraction results as measured by the maximum F1 score over all decision thresholds, averaged across all fields in the target schema shown in Table 2. To demonstrate the benefits of our proposed neural architecture over a naive approach, we use two different baseline models for encoding a candidate and scoring it. 
The bag-of-words BoW baseline incorporates only the neighboring tokens of a candidate, but not their positions. The MLP baseline uses the same input features as our proposed model, including the relative positions of the candidate’s neighbors, and encodes the candidate using 3 hidden layers. Both these baselines follow our representation learning approach, encoding the candidate and the field separately. Just as in our model, the final score is the cosine distance between the candidate and field encodings, normalized to [0, 1] using a sigmoid. We chose the dimension size for each model architecture using a grid-based hyperparameter search. All the metrics we report were obtained from performing 10 training runs and picking the model with the best validation ROC AUC. Table 2 summarizes the results of this performance comparison. On both our evaluation datasets, our model showed a significant improvement over the baselines by both metrics. For the invoice corpus, our model outperforms the BoW baseline by about 1 point in the scorer ROC AUC, 6501 Corpus Field Field Type Train Test Scorer ROC AUC End-to-End Max F1 # +ves % +ves BoW MLP Ours BoW MLP Ours Invoices amount_due currency 5,930 4.8% 0.967 0.968 0.973 0.800 0.789 0.801 due_date date 5,788 12.9% 0.977 0.973 0.984 0.835 0.850 0.861 invoice_date date 13,638 57.4% 0.983 0.986 0.986 0.933 0.939 0.940 invoice_id alphanum 13,719 6.8% 0.983 0.988 0.993 0.913 0.937 0.949 purchase_order alphanum 13,262 2.2% 0.959 0.967 0.976 0.826 0.851 0.896 total_amount currency 8,182 12.5% 0.966 0.972 0.980 0.834 0.849 0.858 total_tax_amount currency 2,949 7.5% 0.975 0.967 0.980 0.756 0.812 0.839 Macro-average 14.9% 0.973 0.974 0.982 0.842 0.861 0.878 Receipts date date 258 85.5% 0.748 0.792 0.737 0.885 0.885 0.854 total currency 475 16.7% 0.834 0.796 0.889 0.631 0.607 0.813 Macro-average 51.1% 0.791 0.794 0.813 0.758 0.746 0.833 Table 2: Performance on the test set of unseen templates for Invoices and Receipts. The best-performing architecture in each case is highlighted. which translates to about 3.6 points improvement in the end-to-end Max F1. In fact, our model beats the baseline in every field in our invoice target schema as well. This difference in performance clearly demonstrates the need to incorporate token positions to extract information accurately from form-like documents. Using neighbor position information, the MLP baseline is able to outperform the BoW baseline as well, but the improvement in end-to-end Max F1 is only about 2 points. This result demonstrates that our proposed architecture is better able to encode position information than a naive MLP. Similarly, for the receipt corpus also, our model outperforms both the baselines. The improvement is much larger for the total field, more than 20 points. For the date field, since there are too few negative candidates in the dataset, all the models have comparable performance end-to-end. A close examination of the per-field performance metrics in Table 2 reveals that model performance is greatly affected by both the number of positive training candidates, as well as by the ratio of positives to negatives. The best performance is observed for fields that occur frequently in invoices (e.g., invoice_id) and where the candidate generator emits only a small number of negatives for each positive (e.g., invoice_date). Conversely, the fields that are hardest to extract are those that are relatively rare and have low-precision candidate generators, viz., amount_due and total_tax_amount. 
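The end-to-end numbers in Table 2 are maximum F1 scores over all decision thresholds. The sketch below is a simplified, hedged reconstruction of such a threshold sweep, operating on already-scored field predictions paired with 0/1 correctness labels; the paper does not spell out the exact sweep or how per-document extraction decisions are aggregated, so this is an approximation for illustration only.

```python
import numpy as np

def max_f1_over_thresholds(scores: np.ndarray, labels: np.ndarray) -> float:
    """Best F1 achievable by any threshold on the prediction scores.

    scores: confidence of each field prediction; labels: 1 if the
    prediction matches the ground truth, else 0.
    """
    best = 0.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        if tp == 0:
            continue
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        best = max(best, 2 * precision * recall / (precision + recall))
    return float(best)
```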
We also studied our model performance over various ablation setups and found that the relative order in which various features influence generalization performance is: neighbor text > candidate position > neighbor position. This result is also borne out by the fact that the BoW baseline, which omits the last of these features, is quite competitive with the other approaches. We also compared the performance of our proposed architecture with and without the selfattention layer applied to the neighbor encodings. We found that self-attention contributes greatly to model performance for the invoice corpus: not only did self-attention lead to a 1-point improvement in scorer ROC AUC and a 1.7 point improvement in end-to-end max F1, we also observed an improvement in every single field in our invoice schema. 6.2 Meaningful internal representations We investigated the internal representations learned by our model by visualizing their 2-D projections using TSNE. Figure 4(a) shows the representations learned for date candidates. They are colored based on the ground truth data indicating if they belong to one of invoice_date, due_date, or delivery_date. The learned encodings clearly show three distinct (by color) coherent clusters matching the respective field labels. Figure 4(b) shows the candidate encodings for a sample of positive and negative date candidates for the invoice_date field, along with the embedding for that field. It is apparent that the encodings of the positive examples are largely clustered together whereas the sampled negatives show a more uniform and sparse spatial distribution. Furthermore, the field embedding lies close to the cluster of positive examples. It is interesting to note that the field embedding lies not at the center of the cluster, but rather at its edge, as far away as possible from the clusters of positive examples for other 6502 Figure 4: TSNE visualizations for (a) positive candidate encodings for the date fields in the target schema for invoices, and (b) positive and negative candidate encodings for invoice_date field as well as its field embedding. (c), (d) and (e) show three cases of misclustered candidate encodings fields. This pattern is predicted by the fact that the loss function is essentially trying to minimize the cosine distance between the field embedding and its positives, while maximizing its distance from its negatives, most importantly the positives for the other fields. We also indicate three cases of misclustered candidate encodings in Figure 4(a), whose corresponding invoice candidates and their neighborhoods are excerpted below. Figure 4(c) shows a ground truth positive invoice_date example whose encoding is far from the invoice_date cluster. It is clear from examining the invoice that this is an error in the ground truth labels provided by the human annotator. In fact, this date is the date of purchase and not the invoice date. The candidate shown in Figure 4(d) has a candidate encoding that lies midway between due_date, its true label, and invoice_date. We believe this is explained by the fact that this date has both the terms “Due Date” and “date of invoice” nearby, which are usually indicative of due_date and invoice_date respectively. Finally, Figure 4(e) shows a true invoice_date example whose encoding is far away from all the field clusters. A closer examination of the features of this candidate showed that our OCR engine was unable to detect the word “Date” just above the date due to scanning noise. 
Since this crucial word was missing from the neighbors of this candidate, the learned neighborhood representation was clearly incorrect. 7 Related Work Information extraction from plain text documents for tasks like named entity recognition and relation extraction have benefited from recent advances in deep learning (Lample et al., 2016; Peng et al., 2017). However, these techniques are not directly applicable to our task on form-like documents. Palm et al. (2017) attempts to use RNNs to extract information from form-like documents. However, they treat each line as a vector of n-grams limiting the resulting accuracy. The importance of understanding visual layout was recognized even in the context of information extraction of webpages in recent work (Cai et al., 2004; Yu et al., 2003; Zhu et al., 2006; Cai et al., 2003). The techniques developed by them are, however, not immediately applicable in our context since we do not have access to the source markup representation for the documents we deal with. A common approach to solving the problem of extracting information from form-like documents is to register templates in a system, match new documents to an existing template, and use an extractor learnt from said template (Chiticariu et al., 2013; Schuster et al., 2013). The learning problem we tackle in this paper is more ambitious; we seek to generalize to unseen templates. Our work is most closely related to recent attempts to combine layout features with text signals. Liu et al. (2019) use a document graph and intro6503 duce a graph combination model to combine visual and textual signals in the document. Katti et al. (2018) represent a document as a two-dimensional grid of text tokens. Zhao et al. (2019) show that using grid information can be useful for information extraction tasks. Denk and Reisswig (2019) combine the grid-based approach with BERT-based text encodings. While an apples-to-apples comparison with these approaches is difficult without a shared benchmark, our system has several advantages: in contrast to the graph-based approaches (Liu et al., 2019) we focus on the harder problem of generalizing to unseen templates rather than dealing with the variations within a template. Since we are not starting with raw pixels, our approach is computationally less expensive than grid-based approaches. Further, we do not require clever heuristics to construct a multi-scale grid that is required for the image-segmentation style abstraction to work well. To the best of our knowledge, our approach of using representation learning for this task is the first of its kind. We gain many of the well-known benefits of this approach (Bengio et al., 2013), most notably interpretability. 8 Conclusion and Future Work In this paper, we presented a novel approach to the task of extracting structured information from templatic documents using representation learning. We showed that our extraction system using this approach not only has promising accuracy on unseen templates in two different domains, but also that the learned representations lend themselves to interpretation of loss cases. In this initial foray into this challenging problem, we limited our scope to fields with domain-agnostic types like dates and numbers, and which have only one true value in a document. In future work, we hope to tackle repeated fields and learn domainspecific candidate generators. 
We are also actively investigating how our learned candidate representations can be used for transfer learning to a new domain and, ultimately, in a few-shot setting. Acknowledgements We are grateful to Lauro Costa, Evan Huang, Will Lu, Lukas Rutishauser, Mu Wang, and Yang Xu on the Google Cloud team for their support with data collection, benchmarking, and continuous feedback on our ideas. We are also grateful for our research intern, Beliz Gunel, who helped re-run several experiments and finetune our training pipeline. References Yoshua Bengio, Aaron C. Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. TPAMI, 35(8):1798–1828. Deng Cai, Shipeng Yu, Ji-Rong Wen, and Wei-Ying Ma. 2003. Extracting content structure for web pages based on visual representation. In Web Technologies and Applications, APWeb, pages 406–417. Deng Cai, Shipeng Yu, Ji-Rong Wen, and Wei-Ying Ma. 2004. Block-based web search. In SIGIR, pages 456–463. Laura Chiticariu, Yunyao Li, and Frederick R. Reiss. 2013. Rule-based information extraction is dead! long live rule-based information extraction systems! In EMNLP, pages 827–832. Nilesh Dalvi, Ravi Kumar, and Mohamed Soliman. 2011. Automatic wrappers for large scale web extraction. In VLDB, volume 4, pages 219–230. Brian L. Davis, Bryan S. Morse, Scott Cohen, Brian L. Price, and Chris Tensmeyer. 2019. Deep visual template-free form parsing. CoRR, abs/1909.02576. Timo I. Denk and Christian Reisswig. 2019. Bertgrid: Contextualized embedding for 2d document representation and understanding. CoRR, abs/1909.04948. iPayables. 2016. Why Automation Matters: A Survey Study of the Modern Accounts Payable Department. Technical report, iPayables. Anoop R. Katti, Christian Reisswig, Cordula Guder, Sebastian Brarda, Steffen Bickel, Johannes Höhne, and Jean Baptiste Faddoul. 2018. Chargrid: Towards understanding 2d documents. In EMNLP, pages 4459–4469. Brian Kulis. 2013. Metric learning: A survey. Foundations and Trends R⃝in Machine Learning, 5(4):287– 364. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL, pages 260–270. Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2020. On the variance of the adaptive learning rate and beyond. In ICLR. Xiaojing Liu, Feiyu Gao, Qiong Zhang, and Huasha Zhao. 2019. Graph convolution for multimodal information extraction from visually rich documents. In NAACL, pages 32–39. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. JMLR, 9(Nov):2579– 2605. 6504 Rasmus Berg Palm, Ole Winther, and Florian Laws. 2017. Cloudscan - A configuration-free invoice analysis system using recurrent neural networks. In ICDAR, pages 406–413. Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph lstms. TACL, 5:101–115. Sunita Sarawagi. 2008. Information extraction. Foundations and Trends R⃝in Databases, 1(3):261–377. Daniel Schuster, Klemens Muthmann, Daniel Esser, Alexander Schill, Michael Berger, Christoph Weidling, Kamil Aliyev, and Andreas Hofmeier. 2013. Intellix - end-user trained information extraction for document archiving. In ICDAR, pages 101–105. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 5998–6008. 
Shipeng Yu, Deng Cai, Ji-Rong Wen, and Wei-Ying Ma. 2003. Improving pseudo-relevance feedback in web information retrieval using web page segmentation. In WWW, pages 11–18. Xiaohui Zhao, Zhuo Wu, and Xiaoguang Wang. 2019. CUTIE: learning to understand documents with convolutional universal text information extractor. CoRR, abs/1903.12363. Jun Zhu, Zaiqing Nie, Ji-Rong Wen, Bo Zhang, and Wei-Ying Ma. 2006. Simultaneous record detection and attribute labeling in web data extraction. In KDD, pages 494–503.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6505–6514 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6505 Single-/Multi-Source Cross-Lingual NER via Teacher-Student Learning on Unlabeled Data in Target Language Qianhui Wu1, Zijia Lin2, B¨orje F. Karlsson2, Jian-Guang Lou2, and Biqing Huang1 1Beijing National Research Center for Information Science and Technology (BNRist) Department of Automation, Tsinghua University, Beijing 100084, China [email protected], [email protected] 2Microsoft Research, Beijing 100080, China {zijlin,borje.karlsson,jlou}@microsoft.com Abstract To better tackle the named entity recognition (NER) problem on languages with little/no labeled data, cross-lingual NER must effectively leverage knowledge learned from source languages with rich labeled data. Previous works on cross-lingual NER are mostly based on label projection with pairwise texts or direct model transfer. However, such methods either are not applicable if the labeled data in the source languages is unavailable, or do not leverage information contained in unlabeled data in the target language. In this paper, we propose a teacher-student learning method to address such limitations, where NER models in the source languages are used as teachers to train a student model on unlabeled data in the target language. The proposed method works for both single-source and multi-source crosslingual NER. For the latter, we further propose a similarity measuring method to better weight the supervision from different teacher models. Extensive experiments for 3 target languages on benchmark datasets well demonstrate that our method outperforms existing state-of-theart methods for both single-source and multisource cross-lingual NER. 1 Introduction Named entity recognition (NER) is the task of identifying text spans that belong to pre-defined categories, like locations, person names, etc. It’s a fundamental component in many downstream tasks, and has been greatly advanced by deep neural networks (Lample et al., 2016; Chiu and Nichols, 2016; Peters et al., 2017). However, these approaches generally require massive manually labeled data, which prohibits their adaptation to lowresource languages due to high annotation costs. One solution to tackle that is to transfer knowledge from a source language with rich labeled data to a target language with little or even no labeled ℳ𝑠𝑟𝑐 ℳ𝑡𝑔𝑡 Directly apply (i.e., ℳ𝑡𝑔𝑡= ℳ𝑠𝑟𝑐) {𝑋, 𝑌}𝑠𝑟𝑐 {𝑋′}𝑡𝑔𝑡 ℳ𝑡𝑔𝑡 Pairwise Relation ℳ𝑠𝑟𝑐 {𝑋′}𝑡𝑔𝑡 ℳ𝑡𝑔𝑡 (b) (a) (c) {𝑋′, 𝑌′}𝑡𝑔𝑡 {𝑋′, 𝑃′}𝑡𝑔𝑡 Training Training Figure 1: Comparison between previous cross-lingual NER methods (a/b) and the proposed method (c). (a): direct model transfer; (b): label projection with pairwise texts; (c): proposed teacher-student learning method. Msrc/tgt: learned NER model for source/target language; {X, Y }src: labeled data in source language; {X′}tgt: unlabeled data in target language; {X′, Y ′}tgt/{X′, P ′}tgt: pseudo-labeled data in target language with hard labels / soft labels. data, which is referred to as cross-lingual NER (Wu and Dredze, 2019; Wu et al., 2020). In this paper, following Wu and Dredze (2019) and Wu et al. (2020), we focus on the extreme scenario of crosslingual NER where no labeled data is available in the target language, which is challenging in itself and has attracted considerable attention from the research community in recent years. 
Previous works on cross-lingual NER are mostly based on label projection with pairwise texts or direct model transfer. Label-projection based methods focus on using labeled data in a source language to generate pseudo-labelled data in the target language for training an NER model. For example, Ni et al. (2017) creates automatically labeled NER data for the target language via label projection on comparable corpora and develops a heuristic scheme to select good-quality projection-labeled data. Mayhew et al. (2017) and Xie et al. (2018) 6506 translate the source language labeled data at the phrase/word level to generate pairwise labeled data for the target language. Differently, model-transfer based methods (Wu and Dredze, 2019; Wu et al., 2020) focus on training a shared NER model on the labeled data in the source language with languageindependent features, such as cross-lingual word representations (Devlin et al., 2019), and then directly testing the model on the target language. However, there are limitations in both labelprojection based methods and model-transfer based methods. The former relies on labeled data in the source language for label projection, and thus is not applicable in cases where the required labeled data is inaccessible (e.g., due to privacy/sensitivity issues). Meanwhile, the later does not leverage unlabeled data in the target language, which can be much cheaper to obtain and probably contains very useful language information. In this paper, we propose a teacher-student learning method for cross-lingual NER to address the mentioned limitations. Specifically, we leverage multilingual BERT (Devlin et al., 2019) as the base model to produce language-independent features. A previously trained NER model for the source language is then used as a teacher model to predict the probability distribution of entity labels (i.e., soft labels) for each token in the non-pairwise unlabeled data in the target language. Finally, we train a student NER model for the target language using the pseudo-labeled data with such soft labels. The proposed method does not rely on labelled data in the source language, and it also leverages the available information from unlabeled data in the target language, thus avoiding the mentioned limitations of previous works. Note that we use the teacher model to predict soft labels rather than hard labels (i.e., one-hot labelling vector), as soft labels can provide much more information (Hinton et al., 2015) for the student model. Figure 1 shows the differences between the proposed teacher-student learning method and the typical label-projection or model-transfer based methods. We further extend our teacher-student learning method to multi-source cross-lingual NER, considering that there are usually multiple source languages available in practice and we would prefer transferring knowledge from all source languages rather than a single one. In this case, our method still enjoys the same advantages in terms of data availability and inference efficiency, compared with existing works (T¨ackstr¨om, 2012; Chen et al., 2019; Enghoff et al., 2018; Rahimi et al., 2019). Moreover, we propose a method to measure the similarity between each source language and the target language, and use this similarity to better weight the supervision from the corresponding teacher model. We evaluate our proposed method for 3 target languages on benchmark datasets, using different source language settings. 
Experimental results show that our method outperforms existing state-of-the-art methods for both single-source and multi-source cross-lingual NER. We also conduct case studies and statistical analyses to discuss why teacher-student learning reaches better results. The main contributions of this work are: • We propose a teacher-student learning method for single-source cross-lingual NER, which addresses limitations of previous works w.r.t data availability and usage of unlabeled data. • We extend the proposed method to multisource cross-lingual NER, using a measure of the similarities between source/target languages to better weight teacher models. • We conduct extensive experiments validating the effectiveness and reasonableness of the proposed methods, and further analyse why they attain superior performance. 2 Related Work Single-Source Cross-Lingual NER: Such approaches consider one single source language for knowledge transfer. Previous works can be divided into two categories: label-projection and modeltransfer based methods. Label-projection based methods aim to build pseudo-labeled data for the target language to train an NER model. Some early works proposed to use bilingual parallel corpora and project model expectations (Wang and Manning, 2014) or labels (Ni et al., 2017) from the source language to the target language with external word alignment information. But obtaining parallel corpora is expensive or even infeasible. To tackle that, recent methods proposed to firstly translate source-language labeled data at the phrase level (Mayhew et al., 2017) or word level (Xie et al., 2018), and then directly copy labels across languages. But translation introduces extra noise due to sense ambiguity and word order differences between languages, thus hurting the trained model. 6507 Model-transfer based methods generally rely on language-independent features (e.g., crosslingual word embeddings (Ni et al., 2017; Huang et al., 2019; Wu and Dredze, 2019; Moon et al., 2019), word clusters (T¨ackstr¨om et al., 2012), gazetteers (Zirikly and Hagiwara, 2015), and wikifier features (Tsai et al., 2016)), so that a model trained with such features can be directly applied to the target language. For further improvement, Wu et al. (2020) proposed constructing a pseudotraining set for each test case and fine-tuning the model before inference. However, these methods do not leverage any unlabeled data in the target language, though such data can be easy to obtain and benefit the language/domain adaptation. Multi-Source Cross-Lingual NER: Multisource cross-lingual NER considers multiple source languages for knowledge transfer. T¨ackstr¨om (2012) and Moon et al. (2019) concatenated the labeled data of all source languages to train a unified model, and performed cross-lingual NER in a direct model transfer manner. Chen et al. (2019) leveraged adversarial networks to learn language-independent features, and learns a mixture-of-experts model (Shazeer et al., 2017) to weight source models at the token level. However, both methods straightly rely on the availability of labeled data in the source languages. Differently, Enghoff et al. (2018) implemented multi-source label projection and studied how source data quality influence performance. Rahimi et al. (2019) applied truth inference to model the transfer annotation bias from multiple sourcelanguage models. 
However, both methods make predictions via an ensemble of source-language models, which is cumbersome and computationally expensive, especially when a source-language model has massive parameter space. Teacher-Student Learning: Early applications of teacher-student learning targeted model compression (Bucilu et al., 2006), where a small student model is trained to mimic a pre-trained, larger teacher model or ensemble of models. It was soon applied to various tasks like image classification (Hinton et al., 2015; You et al., 2017), dialogue generation (Peng et al., 2019), and neural machine translation (Tan et al., 2019), which demonstrated the usefulness of the knowledge transfer approach. Encoder Layer Linear Classification Layer Student Loss Function Gradient Back-Propagation Encoder Layer Linear Classification Layer Teacher Inference Training Unlabeled Target-Language Data Figure 2: Framework of the proposed teacher-student learning method for single-source cross-lingual NER. In this paper, we investigate teacher-student learning for the task of cross-lingual NER, in both single-source and multi-source scenarios. Different from previous works, our proposed method does not rely on the availability of labelled data in source languages or any pairwise texts, while it can also leverage extra information in unlabeled data in the target language to enhance the cross-lingual transfer. Moreover, compared with using an ensemble of source-language models, our method uses a single student model for inference, which can enjoy higher efficiency. 3 Methodology Named entity recognition can be formulated as a sequence labeling problem, i.e., given a sentence x = {xi}L i=1 with L tokens, an NER model is supposed to infer the entity label yi for each token xi and output a label sequence y = {yi}L i=1. Under the paradigm of cross-lingual NER, we assume there are K source-language models previously trained with language-independent features. Our proposed teacher-student learning method then uses those K source-language models as teachers to train an effective student NER model for the target language on its unlabeled data Dtgt. 3.1 Single-Source Cross-Lingual NER Here we firstly consider the case of only one source language (K = 1) for cross-lingual NER. The overall framework of the proposed teacher-student learning method for single-source cross-lingual NER is illustrated in Figure 2. 6508 3.1.1 NER Model Structure As shown in Figure 2, for simplicity, we employ the same neural network structure for both teacher (source-language) and student (target-language) NER models. Note that the student model is flexible and its structure can be determined according to the trade-off between performance and training/inference efficiency. Here the adopted NER model consists of an encoder layer and a linear classification layer. Specifically, given an input sequence x = {xi}L i=1 with L tokens, the encoder layer fθ maps it into a sequence of hidden vectors h = {hi}L i=1: h = fθ(x) (1) Here fθ(·) can be any encoder model that produces cross-lingual token representations, and hi is the hidden vector corresponding to the i-th token xi. With each hi derived, the linear classification layer computes the probability distribution of entity labels for the corresponding token xi, using a softmax function: p(xi, Θ) = softmax(Whi + b) (2) where p(xi, Θ) ∈R|C| with C being the entity label set, and Θ = {fθ, W, b} denotes the to-belearned model parameters. 
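Before the model structure and loss are detailed below, the single-source procedure can be summarized in a short, hedged PyTorch-style sketch. The model interfaces (each model returning per-token probability distributions over entity labels), the use of the default mean reduction in the loss, and all names are our own simplifications of the objective formalized in Section 3.1.2.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch_tokens, optimizer):
    # Teacher produces soft labels on unlabeled target-language text;
    # no gradients flow into the (frozen) teacher.
    with torch.no_grad():
        soft_labels = teacher(batch_tokens)      # (B, L, |C|)
    student_probs = student(batch_tokens)        # (B, L, |C|)
    # MSE between the two distributions; the paper averages per token
    # and sums over sentences, which the default mean reduction only
    # approximates.
    loss = F.mse_loss(student_probs, soft_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time only the trained student model is used to label target-language sentences.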
3.1.2 Teacher-Student Learning Training: We train the student model to mimic the output probability distribution of entity labels by the teacher model, on the unlabeled data in the target language Dtgt. Knowledge from the teacher model is expected to transfer to the student model, while the student model can also leverage helpful language-specific information available in the unlabeled target-language data. Given an unlabeled sentence x′ ∈Dtgt in the target language, the teacher-student learning loss w.r.t x′ is formulated as the mean squared error (MSE) between the output probability distributions of entity labels by the student model and those by the teacher model, averaged over tokens. Note that here we follow Yang et al. (2019) and use the MSE loss, because it is symmetric and mimics all probabilities equally. Suppose that for the i-token in x′, i.e., x′ i, the probability distribution of entity labels output by the student model is denoted as ˆp(x′ i, ΘS), and that output by the teacher model as ˜p(x′ i, ΘT ). Here ΘS and ΘT , respectively, denote Gradient Back-Propagation Inference Training Unlabeled Target-Language Data Student Θ𝑆 Loss Function Teacher Θ𝑇 (𝐾) . . . ⨀ 𝛼𝐾 Teacher Θ𝑇 (1) ⨀ 𝛼1 ⨁ Figure 3: Framework of the proposed teacher-student learning method for multi-source cross-lingual NER. the parameters of the student and the teacher models. The teacher-student learning loss w.r.t x′ is then defined as: L(x′, ΘS) = 1 L L X i=1 MSE ˆp(x′ i, ΘS), ˜p(x′ i, ΘT )  (3) And the whole training loss is the summation of losses w.r.t all sentences in Dtgt, as defined below. L(ΘS) = X x′∈Dtgt L(x′, ΘS) (4) Minimizing L(ΘS) will derive the student model. Inference: For inference in the target language, we only utilize the learned student model to predict the probability distribution of entity labels for each token xi in a test sentence x. Then we take the entity label c ∈C with the highest probability as the predicted label yi for xi: yi = arg max c ˆp(xi, ΘS)c (5) where p(xi, ΘS)c denotes the predicted probability corresponding to the entity label c in p(xi, ΘS). 3.2 Multi-Source Cross-Lingual NER The framework of the proposed teacher-student learning method for multi-source (K > 1) crosslingual NER is illustrated in Figure 3. 3.2.1 Extension to Multiple Teacher Models As illustrated in Figure 3, we extend the singleteacher framework in Figure 2 into a multi-teacher one, while keeping the student model unchanged. Note that, for simplicity, all teacher models and the student model use the same model structure as 6509 3.1.1. Take the k-th teacher model for example, and denote its parameters as Θ(k) T . Given a sentence x′ = {x′ i}L i=1 with L tokens from the unlabeled data Dtgt in the target language, the output probability distribution of entity labels w.r.t the i-th token xi can be derived as Eq. 1 and 2, which is denoted as ˜p(x′ i, Θ(k) T ). To combine all teacher models, we add up their output probability distributions with a group of weights {αk}K k=1 as follows. ˜p(x′ i, ΘT ) = K X k=1 αk · ˜p(x′ i, Θ(k) T ) (6) where ˜p(x′ i, ΘT ) is the combined probability distribution of entity labels, ΘT = {Θ(k) T }K k=1 is the set of parameters of all teacher models, and αk is the weight corresponding to the k-th teacher model, with PK k=1 αk = 1 and αk ≥0, ∀k ∈{1, . . . , K}. 3.2.2 Weighting Teacher Models Here we elaborate on how to derive the weights {αk}K k=1 in cases w/ or w/o unlabeled data in the source languages. 
Source languages more similar to the target language should generally be assigned higher weights to transfer more knowledge. Without Any Source-Language Data: It is straightforward to average over all teacher models: αk = 1 K , ∀k ∈{1, 2, . . . , K} (7) With Unlabeled Source-Language Data: As no labeled data is available, existing supervised language/domain similarity learning methods for a target task (i.e., NER) (McClosky et al., 2010) are not applicable here. Inspired by Pinheiro (2018), we propose to introduce a language identification auxiliary task for calculating similarities between source and target languages, and then weight teacher models based on this metric. In the language identification task, for the kth source language, each unlabeled sentence u(k) in it is associated with the language index k to build its training dataset, denoted as D(k) src = {(u(k), k)}. We also assume that in the mdimensional language-independent feature space, sentences from each source language should be clustered around the corresponding language embedding vector. We thus introduce a learnable language embedding vector µ(k) ∈Rm for the k-th source language, and then utilize a bilinear operator to measure similarity between a given sentence u and the k-th source language: s(u, µ(k)) = gT (u)Mµ(k) (8) where g(·) can be any language-independent model that outputs sentence embeddings, and M ∈ Rm×m denotes the parameters of the bilinear operator. By building a language embedding matrix P ∈ Rm×K with each µ(k) column by column, and applying a softmax function over the bilinear operator, we can derive language-specific probability distributions w.r.t u as below. q(u, M, P) = softmax gT (u)MP  (9) Then the parameters M and P are trained to identify the language of each sentence in {D(k) src}K k=1, via minimizing the cross-entropy (CE) loss: L(P, M) = −1 Z X (u(k),k)∈Dsrc CE  q(u(k), M, P), k  + γ∥PP T −I∥2 F (10) where Dsrc is the union set of {D(k) src}K k=1, Z = |Dsrc|, ∥· ∥2 F denotes the squared Frobenius norm, and I is an identity matrix. The regularizer in L(P, M) is to encourage different dimensions of the language embedding vectors to focus on different aspects, with γ ≥0 being its weighting factor. With learned M and P = [µ(1), µ(2), . . . , µ(K)], we compute the weights {αk}K i=1 using the unlabeled data in the target language Dtgt: αk = 1 |Dtgt| X x′∈Dtgt exp s(x′, µ(k))/τ  PK i=1 exp s(x′, µ(i))/τ  (11) where τ is a temperature factor to smooth the output probability distribution. In our experiments, we set it as the variance of all values in {s(x′, µ(k))}, ∀x′ ∈Dtgt, ∀k ∈{1, ..., K}, so that αk would not be too biased to either 0 or 1. 3.2.3 Teacher-Student Learning Training: With the combined probability distribution of entity labels from multiple teacher models, i.e., ˜p(x′ i, ΘT ) in Eq. 6, the training loss for the student model is identical to Eq. 3 and 4. Inference: For inference on the target language, we only use the learned student model and make predictions as in the single-source scenario (Eq. 5). 6510 Language Type Train Dev Test English-en Sentence 14,987 3,466 3,684 (CoNLL-2003) Entity 23,499 5,942 5,648 German-de Sentence 12,705 3,068 3,160 (CoNLL-2003) Entity 11,851 4,833 3,673 Spanish-es Sentence 8,323 1,915 1,517 (CoNLL-2002) Entity 18,798 4,351 3,558 Dutch-nl Sentence 15,806 2,895 5,195 (CoNLL-2002) Entity 13,344 2,616 3,941 Table 1: Statistics of the benchmark datasets. 
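To illustrate how the teacher weights of Eq. (11) can be computed once M and P have been trained on the language-identification objective of Eq. (10), consider the hedged sketch below; the tensor names, the use of full-batch sentence embeddings for the target language, and the default (unbiased) variance estimate for the temperature are our own assumptions.

```python
import torch

def teacher_weights(sent_embeddings: torch.Tensor,
                    M: torch.Tensor,
                    P: torch.Tensor) -> torch.Tensor:
    """Per-teacher weights alpha_k as in Eq. (11).

    sent_embeddings: (|D_tgt|, m) embeddings g(x') of unlabeled
        target-language sentences.
    M: (m, m) bilinear operator (a low-rank U^T V in practice).
    P: (m, K) matrix whose columns are the source-language embeddings.
    Returns a (K,) tensor of non-negative weights summing to 1.
    """
    # Bilinear similarity s(x', mu_k) for every sentence and source language.
    sims = sent_embeddings @ M @ P               # (|D_tgt|, K)
    # Temperature tau: variance of all similarity values, as in the paper.
    tau = sims.var()
    # Softmax over source languages per sentence, then average over D_tgt.
    return torch.softmax(sims / tau, dim=-1).mean(dim=0)
```

When no unlabeled source-language data is available, this weighting simply falls back to the uniform average of Eq. (7).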
4 Experiments We conduct extensive experiments for 3 target languages (i.e., Spanish, Dutch, and German) on standard benchmark datasets, to validate the effectiveness and reasonableness of our proposed method for single- and multi-source cross lingual NER. 4.1 Settings Datasets We use two NER benchmark datasets: CoNLL-2002 (Spanish and Dutch) (Tjong Kim Sang, 2002); CoNLL-2003 (English and German) (Tjong Kim Sang and De Meulder, 2003). Both are annotated with 4 entity types: PER, LOC, ORG, and MISC. Each language-specific dataset is split into training, development, and test sets. Table 1 reports the dataset statistics. All sentences are tokenized into sequences of subwords with WordPiece (Wu et al., 2016). Following Wu and Dredze (2019), we also use the BIO entity labelling scheme. In our experiments, for each source language, an NER model is trained previously with its corresponding labeled training set. As for the target language, we discard the entity labels from its training set, and use it as unlabeled target-language data Dtgt. Similarly, unlabeled source-language data for learning language similarities (Eq. 10) is simulated via discarding the entity labels of each training set. Network Configurations We leverage the cased multilingual BERTBASE (Wu and Dredze, 2019) for both f(·) in Eq. 1 and g(·) in Eq. 8, with 12 Transformer blocks, 768 hidden units, 12 self-attention head, GELU activations (Hendrycks and Gimpel, 2016), and learned positional embeddings. We use the final hidden vector of the first [CLS] token as the sentence embedding for g(·), and use the mean value of sentence embeddings w.r.t the k-th source language to initialize µ(k) in Eq. 8. es nl de T¨ackstr¨om et al. (2012) 59.30 58.40 40.40 Tsai et al. (2016) 60.55 61.56 48.12 Ni et al. (2017) 65.10 65.40 58.50 Mayhew et al. (2017) 65.95 66.50 59.11 Xie et al. (2018) 72.37 71.25 57.76 Wu and Dredze (2019)† 74.50 79.50 71.10 Moon et al. (2019)† 75.67 80.38 71.42 Wu et al. (2020) 76.75 80.44 73.16 Ours 76.94 80.89 73.22 Table 2: Performance comparisons of single-source cross-lingual NER. † denotes the reported results w.r.t. freezing the bottom three layers of BERTBASE as in this paper. Network Training We implement our proposed method based on huggingface Transformers1. Following Wolf et al. (2019), we use a batch size of 32, and 3 training epochs to ensure convergence of optimization. Following Wu and Dredze (2019), we freeze the parameters of the embedding layer and the bottom three layers of BERTBASE. For the optimizers, we use AdamW (Loshchilov and Hutter, 2017) with learning rate of 5e −5 for teacher models (Wolf et al., 2019), and 1e −4 for the student model (Yang et al., 2019) to converge faster. As for language similarity measuring (i.e., Eq. 10), we set γ = 0.01 following Pinheiro (2018). Besides, we use a low-rank approximation for the bilinear operator M, i.e., M = U T V where U, V ∈Rd×m with d ≪m, and we empirically set d = 64. Performance Metric We use phrase level F1score as the evaluation metric, following Tjong Kim Sang (2002). For each experiment, we conduct 5 runs and report the average F1-score. 4.2 Performance Comparison Single-Source Cross-Lingual NER Table 2 reports the results of different single-source crosslingual NER methods. All results are obtained with English as the source language and others as target languages. It can be seen that our proposed method outperforms the previous state-of-the-art methods. Particularly, compared with the remarkable Wu and Dredze (2019) and Moon et al. 
(2019), which use nearly the same NER model as our method but is based on direct model transfer, our method obtains significant and consistent improvements in 1https://github.com/huggingface/transformers 6511 es nl de T¨ackstr¨om (2012) 61.90 59.90 36.40 Rahimi et al. (2019) 71.80 67.60 59.10 Chen et al. (2019) 73.50 72.40 56.00 Moon et al. (2019)† 76.53 83.35 72.44 Ours-avg 77.75 80.70 74.97 Ours-sim 78.00 81.33 75.33 Table 3: Performance comparisons of multi-source cross-lingual NER. Ours-avg: averaging teacher models (Eq. 7) . Ours-sim: weighting teacher models with learned language similarities (Eq. 11). † denotes the reported results w.r.t. freezing the bottom three layers of BERTBASE. F1-scores, ranging from 0.51 for Dutch to 1.80 for German. That well demonstrates the benefits of teacher-student learning over unlabeled targetlanguage data, compared to direct model transfer. Moreover, compared with the latest meta-learning based method (Wu et al., 2020), our method requires much lower computational costs for both training and inference, meanwhile reaching superior performance. Multi-Source Cross-Lingual NER Here we select source languages in a leave-one-out manner, i.e., all languages except the target one are regarded as source languages. For fair comparisons, we take Spanish, Dutch, and German as target languages, respectively. Table 3 reports the results of different methods for multi-source cross-lingual NER. Both our teacher-student learning methods, i.e., Ours-avg (averaging teacher models, Eq. 7) and Ours-sim (weighting teacher models with learned language similarities, Eq. 11), outperform previous state-ofthe-art methods on Spanish and German by a large margin, which well demonstrates their effectiveness. We attribute the large performance gain to the teacher-student learning process to further leverage helpful information from unlabeled data in the target language. Though Moon et al. (2019) achieves superior performance on Dutch, it is not applicable in cases where the labeled source-language data is inaccessible, and thus it still suffers from the aforementioned limitation w.r.t. data availability. Moreover, compared with Ours-avg, Ours-sim brings consistent performance improvements. That means, if unlabeled data in source languages is available, using our proposed language similarity measuring method for weighting different teacher es nl de Single-source: Ours 76.94 80.89 73.22 HL 76.60 (-0.34) 80.43 (-0.46) 72.98 (-0.24) MT 75.60 (-1.34) 79.99 (-0.90) 71.76 (-1.46) Multi-source: Ours-avg 77.75 80.70 74.97 HL-avg 77.65 (-0.10) 80.39 (-0.31) 74.31 (-0.66) MT-avg 77.25 (-0.50) 80.53 (-0.17) 74.18 (-0.79) Ours-sim 78.00 81.33 75.33 HL-sim 77.81 (-0.19) 80.27 (-1.06) 74.63 (-0.70) MT-sim 77.12 (-0.88) 80.24 (-1.09) 74.33 (-1.00) Table 4: Ablation study of the proposed teacher-student learning method for cross-lingual NER. HL: Hard Label; MT: Direct Model Transfer; *-avg: averaging source-language models; *-sim: weighting sourcelanguage models with learned language similarities. models can be superior to simply averaging them. 4.3 Ablation Study Analyses on Teacher-Student Learning To validate the reasonableness of our proposed teacherstudent learning method for cross-lingual NER, we introduce the following baselines. 1) Hard Label (HL), which rounds the probability distribution of entity labels (i.e., soft labels output by teacher models) into a one-hot labelling vector (i.e., hard labels) to guide the learning of the student model. 
Note that in multi-source cases, we use the combined probability distribution of multiple teacher models (Eq. 6) to derive the hard labels. To be consistent with Eq. 3, we still adopt the MSE loss here. In fact, both MSE loss and cross-entropy loss lead to the same observation described in this subsection. 2) Direct Model Transfer (MT), where NO unlabeled target-language data is available to perform teacher-student learning, and thus it degenerates into: a) directly applying the source-language model in single-source cases, or b) directly applying a weighted ensemble of source-language models in multi-source cases, with weights derived via Eq. 6 and Eq. 11. Table 4 reports the ablation study results. It can be seen that using hard labels (i.e., HL-*) would result in consistent performance drops in all crosslingual NER settings, which validates using soft labels in our proposed teacher-student learning method can convey more information for knowledge transfer than hard labels. Moreover, we can also observe that, using direct model transfer (i.e., 6512 #1 Spanish Source-Language Model: ...Etchart [I-PER, 1.00] Sydney [B-LOC, 0.98] ( Australia [B-LOC, 1.00] ) , 23 may ( EFE [O, 0.53] ) . Ours: Por Mario [B-PER] Etchart [I-PER] Sydney [B-LOC] ( Australia [B-LOC] ) , 23 may ( EFE [B-ORG] ) . Examples in Dtgt: Asi lo anunció a EFE [B-ORG, 1.00] Hans Gaasbek, el abogado de Murillo, argumentando que ... #2 Dutch Source-Language Model: Vanderpoorten [O, 0.87] : ' Dit is een eerste stap in de herwaardering van het beroepsonderwijs " Ours: Vanderpoorten [B-PER] : ' Dit is een eerste stap in de herwaardering van het beroepsonderwijs " Examples in Dtgt: Vanderpoorten [B-PER, 0.99] stond op het punt die reputatie te bezwadderen. #3 German Source-Language Model: ... dabei berücksichtigt werden müsse , forderte Hof [B-ORG, 0.85] eine “ Transparenz ” … Ours: Weil die Altersstruktur dabei berücksichtigt werden müsse , forderte Hof [B-PER] eine “ Transparenz ” … Examples in Dtgt: … meint Hof [B-PER, 0.99] , den der " erstaunliche Pragmatismus der Jugendlichen " beeindruckt . Figure 4: Case study on why teacher-student learning works. The GREEN ( RED ) highlight indicates a correct (incorrect) label. The real-valued numbers indicate the predicted probability corresponding to the entity label. es nl de Ours 78.00 81.33 75.33 cosine 77.86 (-0.14) 79.94 (-1.39) 75.24 (-0.09) ℓ2 77.72 (-0.28) 79.74 (-1.59) 75.09 (-0.24) Table 5: Comparison between the proposed language similarity measuring method and the commonly used cosine/ℓ2 metrics for multi-source cross-lingual NER. MT-*) would lead to even more significant performance drops in all cross-lingual NER settings (up to 1.46 F1-score). Both demonstrate that leveraging unlabeled data in the target language can be helpful, and that the proposed teacher-student learning method is capable of leveraging such information effectively for cross-lingual NER. Analyses on Language Similarity Measuring We further compare the proposed language similarity measuring method with other commonly used unsupervised metrics, i.e., cosine similarity and ℓ2 distance. Specifically, s(x′, µ(k)) in Eq. 11 is replaced by cosine similarity or negative ℓ2 distance between x′ and the mean value of sentence embeddings w.r.t the k-th source language. As shown in Table 5, replacing the proposed language similarity measuring method with either cosine / ℓ2 metrics leads to consistent performance drops across all target languages. 
This further demonstrates the benefits of our language identification based similarity measuring method. 4.4 Why Teacher-Student Learning Works? By analyzing which failed cases of directly applying the source-language model are corrected by the proposed teacher-student learning method, we try to bring up insights on why teacher-student learning works, in the case of single-source cross-lingual NER. 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 probability of the prediction 0.0 0.2 0.4 0.6 0.8 1.0 percentage es nl de Figure 5: Percentage of corrected mispredictions, in different probability intervals. Firstly, teacher-student learning can probably help to learn label preferences for some specific words in the target language. Specifically, if a word appears in the unlabeled target-language data and the teacher model consistently predicts it to be associated with an identical label with high probabilities, the student model would learn the preferred label w.r.t that word, and predict it in cases where the sentence context may not provide enough information. Such label preference can help the predictions for tokens that are less ambiguous and generally associated with an identical entity label. As illustrated in Figure 4, in example #1, the source-language (teacher) model, fails to identify “EFE” as an ORG in the test sentences, while the student model (i.e., Ours) can correctly label it, because it has seen “EFE” labeled as ORG by the teacher model with high probabilities in the unlabeled target-language data Dtgt. Similar results can also be observed in example #2 and #3. Moreover, teacher-student learning may help to find a better classifying hyperplane for the student NER model with unlabelled target-language data. Actually, we notice that the source-language model generally makes correct label predictions with higher probabilities, and makes mispredictions with relatively lower probabilities. By calcu6513 lating the proportion of its mispredictions that are corrected by our teacher-student learning method in different probability intervals, we find that our method tends to correct the low-confidence mispredictions, as illustrated in Figure 5. We conjecture that, with the help of unlabeled target-language data, our method can probably find a better classifying hyperplane for the student model, so that the low-confidence mispredictions, which are closer to the classifying hyperplane of the source-language model, can be clarified. 5 Conclusion In this paper, we propose a teacher-student learning method for single-/multi-source cross-lingual NER, via using source-language models as teachers to train a student model on unlabeled data in the target language. The proposed method does not rely on labelled data in the source languages and is capable of leveraging extra information in the unlabelled target-language data, which addresses the limitations of previous label-projection based and model-transfer based methods. We also propose a language similarity measuring method based on language identification, to better weight different teacher models. Extensive experiments on benchmark datasets show that our method outperforms the existing state-of-the-art approaches. References Cristian Bucilu, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 535–541. ACM. Xilun Chen, Ahmed Hassan Awadallah, Hany Hassan, Wei Wang, and Claire Cardie. 2019. 
Multisource cross-lingual model transfer: Learning what to share. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3098–3112. Jason P.C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357–370. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186. Jan Vium Enghoff, Søren Harrison, and ˇZeljko Agi´c. 2018. Low-resource named entity recognition via multi-source projection: Not quite there yet? In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, pages 195–201. Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. CoRR, abs/1606.08415. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Lifu Huang, Heng Ji, and Jonathan May. 2019. Crosslingual multi-level adversarial transfer to enhance low-resource name tagging. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3823–3833, Minneapolis, Minnesota. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270. Ilya Loshchilov and Frank Hutter. 2017. Fixing weight decay regularization in adam. arXiv preprint arXiv:1711.05101. Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2536–2545. David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 28– 36. Taesun Moon, Parul Awasthy, Jian Ni, and Radu Florian. 2019. Towards lingua franca named entity recognition with bert. arXiv preprint arXiv:1912.01389. Jian Ni, Georgiana Dinu, and Radu Florian. 2017. Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1470–1480. Shuke Peng, Xinjing Huang, Zehao Lin, Feng Ji, Haiqing Chen, and Yin Zhang. 2019. Teacherstudent framework enhanced multi-domain dialogue generation. arXiv preprint arXiv:1908.07137. 6514 Matthew Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1756–1765. Pedro O Pinheiro. 2018. Unsupervised domain adaptation with similarity learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8004–8013. Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151–164. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538. Oscar T¨ackstr¨om. 2012. Nudging the envelope of direct transfer methods for multilingual named entity recognition. In Proceedings of the NAACL-HLT Workshop on the Induction of Linguistic Structure, pages 55–63. Oscar T¨ackstr¨om, Ryan McDonald, and Jakob Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 477–487. Xu Tan, Yi Ren, Di He, Tao Qin, and Tie-Yan Liu. 2019. Multilingual neural machine translation with knowledge distillation. In International Conference on Learning Representations. Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002). Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142–147. Chen-Tse Tsai, Stephen Mayhew, and Dan Roth. 2016. Cross-lingual named entity recognition via wikification. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 219–228. Mengqiu Wang and Christopher D. Manning. 2014. Cross-lingual projected expectation regularization for weakly supervised learning. Transactions of the Association for Computational Linguistics, 2:55–66. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, et al. 2019. Transformers: State-of-theart natural language processing. arXiv preprint arXiv:1910.03771. Qianhui Wu, Zijia Lin, Guoxin Wang, Hui Chen, B¨orje F Karlsson, Biqing Huang, and Chin-Yew Lin. 2020. Enhanced meta-learning for cross-lingual named entity recognition with minimal resources. In Proceedings of the AAAI Conference on Artificial Intelligence. Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833–844. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Jiateng Xie, Zhilin Yang, Graham Neubig, Noah A. Smith, and Jaime Carbonell. 2018. Neural crosslingual named entity recognition with minimal resources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 369–379. Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, and Daxin Jiang. 2019. Model compression with two-stage multi-teacher knowledge distillation for web question answering system. arXiv preprint arXiv:1910.08381. Shan You, Chang Xu, Chao Xu, and Dacheng Tao. 2017. Learning from multiple teacher networks. 
In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1285–1294. ACM. Ayah Zirikly and Masato Hagiwara. 2015. Crosslingual transfer of named entity recognizers without parallel corpora. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 390–396.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6515–6524 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6515 Synchronous Double-channel Recurrent Network for Aspect-Opinion Pair Extraction Shaowei Chen, Jie Liu∗, Yu Wang, Wenzheng Zhang, Ziming Chi College of Artificial Intelligence, Nankai University, Tianjin, China {chenshaowei, yuwang17, wzzhang}@mail.nankai.edu.cn [email protected] [email protected] Abstract Opinion entity extraction is a fundamental task in fine-grained opinion mining. Related studies generally extract aspects and/or opinion expressions without recognizing the relations between them. However, the relations are crucial for downstream tasks, including sentiment classification, opinion summarization, etc. In this paper, we explore AspectOpinion Pair Extraction (AOPE) task, which aims at extracting aspects and opinion expressions in pairs. To deal with this task, we propose Synchronous Double-channel Recurrent Network (SDRN) mainly consisting of an opinion entity extraction unit, a relation detection unit, and a synchronization unit. The opinion entity extraction unit and the relation detection unit are developed as two channels to extract opinion entities and relations simultaneously. Furthermore, within the synchronization unit, we design Entity Synchronization Mechanism (ESM) and Relation Synchronization Mechanism (RSM) to enhance the mutual benefit on the above two channels. To verify the performance of SDRN, we manually build three datasets based on SemEval 2014 and 2015 benchmarks. Extensive experiments demonstrate that SDRN achieves state-of-the-art performances. 1 Introduction Opinion entity extraction, which aims at identifying aspects and/or opinion expressions in review sentences, is an important task in fine-grained opinion mining. Recently, there have been considerable studies focused on this task. Specifically, Liu et al. (2012), Li and Lam (2017) and Li et al. (2018) explored aspect term extraction, and Fan et al. (2019) extracted opinion phrases with given aspects. Meanwhile, many studies dealt with aspect and opinion ∗Corresponding author. Review: The food was nice-looking and delicious. The result of Opinion Entity Extraction: Aspect: {food} Opinion Expression: {nice-looking, delicious} The result of Aspect-Opinion Pair Extraction: {food, nice-looking} {food, delicious} Figure 1: An example of task comparisons. The aspects and the opinion expressions are marked with red and blue, respectively. term co-extraction (Xu et al., 2013; Liu et al., 2015; Wang et al., 2017; Yu et al., 2019; Wang and Pan, 2019; Dai and Song, 2019). These studies have shown the importance of opinion entity extraction and achieved great progress. However, they neglect to recognize the relations between aspects and opinion expressions. While aspect-opinion relation detection is one of the key parts of an opinion mining system (Hu and Liu, 2004; Popescu and Etzioni, 2005; Zhuang et al., 2006), it is neglected or assumed given beforehand, which leaves a significant gap to subsequent opinion mining tasks. For instance, as shown in Figure 1, we can obtain the aspect {food} and the opinion expressions {nice-looking, delicious} from opinion entity extraction. Although both nicelooking and delicious express positive sentiment, they further describe food from the appearance and taste perspectives, respectively. 
Therefore, only with the relations between aspects and opinion expressions, e.g., the pair ⟨food, delicious⟩, can the more fine-grained subsequent tasks be executed, such as pair-level sentiment classification, pairlevel opinion clustering, etc. To bridge the gap between opinion entity extraction and subsequent tasks, we explore AspectOpinion Pair Extraction (AOPE) task, which aims at extracting aspects and opinion expressions along 6516 with their relations. Specially, AOPE is not only necessary for subsequent tasks, but also beneficial to both opinion entity extraction and relation detection. However, the studies on AOPE are very limited. Early works (Hu and Liu, 2004; Zhuang et al., 2006) approach aspect-opinion pair extraction in a pipeline manner by dividing it into two isolated tasks. Yang and Cardie (2013), Klinger and Cimiano (2013b) and Katiyar and Cardie (2016) attempted to extract opinion entities and relations jointly without considering the interaction between opinion entity extraction and relation detection, which limits the performance. Therefore, AOPE remains a rather challenging task. First, the relational structure of aspects and opinion expressions within a sentence can be complicated, requiring the model to be effective and flexible in detecting relations. For example, the relations can be one-to-many, many-to-one, and even embedded or overlapped. Second, opinion entity extraction and relation detection are not two independent tasks as in other multitask learning problems but rely on each other, hence posing a key challenge on how to fuse and learn the two subtasks properly. Third, how to synchronize opinion entity extraction with relation detection and make them mutually promotion is another primary challenge. To address the aforementioned challenges, we propose Synchronous Double-channel Recurrent Network (SDRN). Specifically, we first utilize Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) to learn context representations. Then, the double-channel recurrent network, which consists of an opinion entity extraction unit and a relation detection unit, is constructed to extract aspects, opinion expressions, and relations simultaneously. To enable the information interaction between the above two channels, we design a synchronization unit which contains Entity Synchronization Mechanism (ESM) and Relation Synchronization Mechanism (RSM). Extensive experiments verify that our model achieves state-ofthe-art performances. In summary, our contributions are three-fold: • We explore AOPE task, which is valuable and critical for downstream tasks but remains under-investigated. • We propose an end-to-end neural model, SDRN1. By adopting BERT as the encoding 1https://github.com/NKU-IIPLab/SDRN layer, SDRN can learn richer context semantics. By designing the double-channel network and two synchronization mechanisms, SDRN could process opinion entity extraction and relation detection jointly and make them mutually beneficial. • We manually build three datasets based on SemEval 2014 and 2015 benchmarks for AOPE task. Extensive experiments are conducted to verify that our model achieves state-of-the-art performances. 2 Related Work Aspect-opinion pair extraction is a critical task in fine-grained opinion mining. Early studies approach this task in a pipeline manner. Hu and Liu (2004) used association mining to identify aspects and extract the adjacent adjectives as opinions. Zhuang et al. 
(2006) extracted aspects and opinion expressions first, and then mined the relations with dependency relation templates. Popescu and Etzioni (2005) proposed an unsupervised model to extract aspects and corresponding opinions from reviews with pre-defined rules. Although the above methods achieved great progress, they generally suffered from error propagation. To avoid error propagation, recent studies propose joint learning methods. Klinger and Cimiano (2013a) adopted an Imperatively Defined Factor graph (IDF) to analyze the inter-dependencies between aspects and opinion expressions. Klinger and Cimiano (2013b) presented a joint inference model based on IDF to extract aspect terms, opinion terms, and their relations. Yang and Cardie (2013) employed Integer Linear Programming (ILP) to identify opinion-related entities and their associated relations jointly. However, these works were generally based on shallow machine learning methods and depended on hand-crafted features. To automatically capture features, neural network methods have been applied to various fine-grained opinion mining tasks. Xu et al. (2018) used a Convolutional Neural Network (CNN) to extract aspects. Wang et al. (2016), Wang et al. (2017), Yu et al. (2019) and Wang and Pan (2019) used deep learning methods to deal with aspect and opinion term co-extraction. Li et al. (2018) focused on aspect term extraction and adopted an attention mechanism to exploit the latent relations between aspect and opinion terms. Hu et al. (2019) used BERT to extract aspects and corresponding sentiments. Figure 2: The framework of Synchronous Double-channel Recurrent Network (SDRN). For AOPE, Katiyar and Cardie (2016) explored LSTM-based models to jointly extract opinion entities and their relations with three optimization methods. But this method neglects to learn the interaction between opinion entity extraction and relation detection. Therefore, AOPE is still under-investigated and requires further research. In this paper, we further explore this task and propose a neural model, SDRN. 3 Model Given a review sentence S, the Aspect-Opinion Pair Extraction (AOPE) task aims to obtain a collection of aspect-opinion pairs C = [⟨a_m, o_m⟩]_{m=1}^{M} from S, where a_m and o_m represent the aspect and the opinion expression, respectively2. To deal with the AOPE task, we propose Synchronous Double-channel Recurrent Network (SDRN). The overall framework of SDRN is illustrated in Figure 2. Specifically, we first adopt BERT as the encoding layer to learn the context representations. Then, an opinion entity extraction unit and a relation detection unit are constructed as double channels to extract aspects, opinion expressions, and relations simultaneously. Furthermore, a synchronization unit is designed to enable information interaction between the double channels. To capture high-level representations, we recurrently execute the above units. After multiple recurrent steps, we adopt an inference layer to obtain aspect-opinion pairs. 2Note that a_m or o_m could be a single word or a phrase.
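To make the overall control flow concrete, the following is a minimal, illustrative skeleton of one SDRN forward pass. It is not the released implementation: the module names are ours, a plain linear tagger stands in for the CRF channel, a bilinear score stands in for the supervised self-attention, and the synchronization mechanisms are reduced to crude weighted poolings.

```python
import torch
import torch.nn as nn

class SDRNSketch(nn.Module):
    """Toy skeleton of the double-channel recurrent network; NOT the released code.

    Assumed stand-ins: a linear tagger replaces the CRF channel (Sec. 3.2.1),
    a bilinear score replaces the supervised self-attention (Sec. 3.2.2), and the
    synchronization unit (Sec. 3.3) is reduced to simple weighted poolings.
    """

    def __init__(self, d_model=64, n_labels=5, steps=2):
        super().__init__()
        self.steps = steps                                 # number of recurrent steps T
        self.tagger = nn.Linear(d_model, n_labels)         # channel 1: BIO tagging (CRF stand-in)
        self.pair = nn.Bilinear(d_model, d_model, 1)       # channel 2: token-pair score (stand-in for Eq. 5)
        self.mix_o = nn.Linear(2 * d_model, d_model)       # RSM-style mix producing H^o_t (stand-in for Eq. 13)
        self.mix_r = nn.Linear(2 * d_model, d_model)       # ESM-style mix producing H^r_t (stand-in for Eq. 10)

    def forward(self, h_s):                                # h_s: (N, d) BERT context states (Sec. 3.1)
        n, d = h_s.shape
        u = torch.zeros(n, d)                              # entity semantics, U_0 = 0
        r = torch.zeros(n, d)                              # relation semantics, R_0 = 0
        for _ in range(self.steps):
            h_o = torch.tanh(self.mix_o(torch.cat([r, h_s], dim=-1)))  # input to the entity channel
            h_r = torch.tanh(self.mix_r(torch.cat([u, h_s], dim=-1)))  # input to the relation channel
            label_probs = self.tagger(h_o).softmax(dim=-1)             # opinion entity extraction
            left = h_r.unsqueeze(1).expand(n, n, d).reshape(-1, d)
            right = h_r.unsqueeze(0).expand(n, n, d).reshape(-1, d)
            attn = self.pair(left, right).view(n, n).softmax(dim=-1)   # relation matrix G_t
            u = label_probs.max(dim=-1).values.unsqueeze(-1) * h_s     # crude entity-semantics update
            r = attn @ h_s                                             # crude relation-semantics update
        return label_probs, attn       # passed to the inference layer (Sec. 3.5) to form pairs

# usage: label_probs, attn = SDRNSketch()(torch.randn(12, 64))
```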
3.1 Encoding Layer Given a review sentence S, we first tokenize it using the WordPiece vocabulary (Wu et al., 2016) and add tokens [CLS] and [SEP] to the beginning and the end of the tokenized sentence, respectively. As a result, we obtain the input sequence X = {x1, x2, ..., xN} with N tokens for each sentence. Inspired by the success of BERT (Devlin et al., 2019), we adopt it as the encoder to learn the contextual semantics. For each token xi, the initial embedding ei is constructed by summing the corresponding token embedding ew i , segment embedding es i, and position embedding ep i . Then, the embedding sequence E = {e1, e2, ..., eN} is fed into BERT, which consists of stacked Transformer blocks with multiple self-attention heads (Vaswani et al., 2017). We take the output of the last Transformer block as the context representation sequence Hs = {hs 1, hs 2, ..., hs N}. 3.2 Double-channel Recurrent Network 3.2.1 Opinion Entity Extraction Unit The opinion entity extraction unit, which aims at extracting the aspects and the opinion expressions, is developed as a channel of SDRN. To deal with this sequence labeling task, we couple Conditional Random Field (CRF) (Lafferty et al., 2001) upon the encoding layer, which serves as the opinion entity extraction unit. Formally, CRF adopts a state score matrix P ∈RN×K to model the mappings between tokens and labels, and a transition score matrix Q ∈RK×K to model the relations between adjacent labels, where K denotes the dimension of the label space3. For a sequence of predicted labels Y t =  yt 1, yt 2, ..., yt N at the t-th recurrent step, we define its score as follows: S(X, Y t) = N X i=1 Qyt i−1,yt i + N X i=1 P t i,yt i, (1) P t = Ho t Wp + bp, (2) where Ho t = n ho t,1, ho t,2, ..., ho t,N o denotes the input hidden representation sequence at the t-th recurrent step for the opinion entity extraction unit, which is calculated with the context representation sequence Hs and the relation synchronization semantics Rt−1. The details will be described in 3Following the BIO tagging scheme, we define five labels, including BA (beginning of aspect), IA (inside of aspect), BP (beginning of opinion expression), IP (inside of opinion expression), and O (others). 6518 Section 3.3.2. The matrices Wp ∈Rdo×K and bp ∈RN×K are model parameters, where do denotes the dimension of hidden representation ho t,i. Then, the probability of the predicted sequence Y t can be calculated as follows: p Y t | X  = exp(S(X, Y t)) P eY t∈Y t X exp(S(X, eY t)) , (3) where Y t X denotes all possible label sequences. During training, we maximize the likelihood probability p (Y | X) of gold label sequence at the last step. During decoding, we use the Viterbi algorithm to find the label sequence with the maximum score. 3.2.2 Relation Detection Unit To extract opinion entities and relations simultaneously, we design a relation detection unit as another channel of SDRN. Considering the complicated relations between aspects and opinion expressions, we devise a supervised self-attention mechanism as the relation detection unit to flexibly model tokenlevel relations without the sequential limitation. 
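Before the attention computation is spelled out, the CRF scoring used by the entity channel (Eqs. 1–3) can be made concrete with a short sketch. This is an illustration rather than the authors' code: start and stop transitions are omitted, and the normalizer is computed by brute-force enumeration purely for exposition (real implementations use the forward algorithm and Viterbi decoding).

```python
import torch

def crf_sequence_score(emissions, transitions, labels):
    """Score S(X, Y) of one label sequence under a linear-chain CRF (Eq. 1).

    emissions:   (N, K) state scores P, one row per token (Eq. 2)
    transitions: (K, K) scores Q for moving from label i to label j
    labels:      (N,)   integer label sequence Y
    (Start/stop transitions are omitted here for brevity.)
    """
    emit = emissions[torch.arange(len(labels)), labels].sum()
    trans = transitions[labels[:-1], labels[1:]].sum()
    return emit + trans

def crf_log_prob(emissions, transitions, labels):
    """log p(Y | X) as in Eq. (3), normalizing over all possible label sequences.

    The brute-force enumeration below is practical only for tiny N and K and is
    meant purely as exposition of the normalizer in Eq. (3).
    """
    n, k = emissions.shape
    all_seqs = torch.cartesian_prod(*[torch.arange(k) for _ in range(n)])
    all_scores = torch.stack([crf_sequence_score(emissions, transitions, seq)
                              for seq in all_seqs])
    return crf_sequence_score(emissions, transitions, labels) - torch.logsumexp(all_scores, dim=0)

# usage: crf_log_prob(torch.randn(4, 5), torch.randn(5, 5), torch.tensor([0, 1, 4, 4]))
```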
At the t-th recurrent step, we first compute the attention matrix Gt ∈RN×N whose element gt i,j represents the degree of correlation between the i-th token and the j-th token as follows: gt i,j = exp  γ  hr t,i, hr t,j  PN k=1 exp  γ  hr t,i, hr t,k , (4) γ hr t,i, hr t,j  = tanh hr t,iW 1 r + hr t,jW 2 r  W 3 r , (5) where γ is a score function, and hr t,i denotes the input hidden representation of the i-th token for the relation detection unit. Note that the hidden representation sequence Hr t = n hr t,1, hr t,2, ..., hr t,N o is calculated with the context representation sequence Hs and the entity synchronization semantics Ut−1. The details will be described in Section 3.3.1. The matrices W 1 r ∈Rdr×dr, W 2 r ∈Rdr×dr, and W 3 r ∈Rdr×1 are model parameters, where dr is the dimension of hidden representation hr t,i. At the last step T, we further introduce supervision information into the calculation of the attention matrix GT by maximizing the likelihood probability as follows: p (Z|X) = N Y i=1 N Y j=1 p (zi,j|xi, xj) , (6) where the standard relation matrix Z ∈RN×N consists of element zi,j, and the relation probability p (zi,j|xi, xj) can be calculated as follows: p (zi,j|xi, xj) =  gT i,j, if zi,j = 1 1 −gT i,j, if zi,j = 0 , (7) where zi,j = 1 denotes the fact that there is a relation between the i-th token and the j-th token, and vice versa. With this supervision information, the attention can be guided to capture the correlations between the tokens more effectively. 3.3 Synchronization Unit Since the above two channels are interdependent, it is important to synchronize their information and make them mutually beneficial. To this end, we design Entity Synchronization Mechanism (ESM) and Relation Synchronization Mechanism (RSM) to update the hidden representation sequences Ho t and Hr t by exchanging the high-level information. 3.3.1 Entity Synchronization Mechanism Considering that opinion entities are generally phrases, both opinion entity semantics and tokenlevel interactions are crucial in detecting relations. For instance, given an aspect ‘hot dog’ and an opinion expression ‘tasty’, there is no relation between ‘hot’ and ‘tasty’ when only token-level interaction is considered, but it is easy to detect the relation if we utilize the semantics of aspect ‘hot dog’. Accordingly, we design ESM to capture the corresponding entity semantics for each token and integrate these semantics into the hidden representation sequence Hr t+1. Specifically, based on the predicted label sequence Y t and its probability obtained from the opinion entity extraction unit, each entity semantics ut,i of the i-th token at the t-th recurrent step can be calculated as follows: ut,i = N X j=1 ϕ(Bt i,j)hs j, (8) ϕ(Bt i,j) = Bt i,j PN k=1 Bt i,k , (9) where Bt i,j is the label probability of the j-th token if the i-th token and the j-th token belong to the same entity; otherwise, Bt i,j is zero. And ϕ(·) is a normalization function. To integrate both the context representation hs i and the entity semantics ut,i, we calculate the hidden representation hr t+1,i as follows: hr t+1,i = σ(ut,iW 4 r + hs iW 5 r ), (10) 6519 where W 4 r ∈Rds×dr and W 5 r ∈Rds×dr are model parameters, ds is the dimension of context representation, and σ is the activation function which can be tanh or sigmoid function. Note that we use zero matrix to initialize the entity semantics sequence U0 = {u0,1, u0,2, ..., u0,N}. 
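Read together, Eqs. (8)–(10) amount to a label-probability-weighted pooling over the tokens of each predicted entity, followed by a projection that mixes the pooled entity semantics with the encoder states. The sketch below is our paraphrase with hypothetical argument names, not the released code; entity spans are assumed to be given by the decoded label sequence.

```python
import torch

def entity_synchronization(h_s, label_probs, spans, w4, w5):
    """Sketch of ESM (Eqs. 8-10); argument names are hypothetical.

    h_s:         (N, d_s) context representations from the encoder
    label_probs: (N,)     probability of each token's predicted label
    spans:       list of (start, end) index pairs (end exclusive), one per predicted entity
    w4, w5:      (d_s, d_r) projection matrices
    Returns H^r_{t+1} of shape (N, d_r).
    """
    n = h_s.size(0)
    b = torch.zeros(n, n)                                  # B: label prob of token j if i and j share an entity
    for start, end in spans:
        idx = torch.arange(start, end)
        b[idx.unsqueeze(1), idx] = label_probs[idx]        # fill the block for this entity
    phi = b / b.sum(dim=-1, keepdim=True).clamp(min=1e-9)  # Eq. (9); rows outside any entity stay zero
    u = phi @ h_s                                          # Eq. (8): per-token entity semantics
    return torch.tanh(u @ w4 + h_s @ w5)                   # Eq. (10), taking sigma = tanh

# shape check: entity_synchronization(torch.randn(6, 8), torch.rand(6),
#                                     [(1, 3), (4, 6)], torch.randn(8, 4), torch.randn(8, 4))
```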
3.3.2 Relation Synchronization Mechanism Since the relations between opinion entities can provide clues for opinion entity extraction, it’s important to encode the relation semantics. For example, if ‘overrated’ is used to modify ‘pizza’, this relation could provide guidance to extract the aspect ‘pizza’ and the opinion expression ‘overrated’. Thus, we design RSM to capture the semantics which reflect the relations and update the hidden representation sequence Ho t+1. Concretely, at the t-th recurrent step, we can calculate the relation semantics rt,i of the i-th token with the correlated degree gt i,j from the relation detection unit: rt,i = N X j=1 ϕ(φ(gt i,j))hs j, (11) φ(gt i,j) = gt i,j, if gt i,j ⩾β 0, if gt i,j < β , (12) where ϕ(·) is the same normalization function as Eq.(9). To avoid noise, we utilize φ(·) to filter correlated scores below the given threshold β. Then, we combine the relation semantics rt,i and context representation hs i to obtain the hidden representation ho t+1,i: ho t+1,i = σ rt,iW 1 o + hs iW 2 o  , (13) where W 1 o ∈Rds×do and W 2 o ∈Rds×do are model parameters. Similar to ESM, the initial relation semantics sequence R0 = {r0,1, r0,2, ..., r0,N} is set to zero. Particularly, the integration methods used in ESM and RSM can also make the proposed SDRN easy to optimize, which is similar to the shortcut connections (He et al., 2016). 3.4 Joint Learning To synchronously learn the proposed two channels, we fuse the loss functions from the two channels. For opinion entity extraction unit, given the gold label sequence Y , we minimize the negative loglikelihood loss function at the last step as follows: LE = log X eY ∈Y T X exp  S  X, eY  −S (X, Y ) . (14) For the relation detection unit, we convert the gold annotation to a one-hot matrix, where 0 denotes no relations, and 1 represents the existence of relations between two tokens. Then, we minimize the cross-entropy loss between the predicted distribution ˆp (zi,j|xi, xj) at the last step and the gold distribution p (zi,j|xi, xj) as follows: LR = − N X i=1 N X j=1 p(zi,j|xi, xj)log [ˆp(zi,j|xi, xj)] . (15) Then, the two parts are combined to construct the loss objective of the entire model: L (θ) = LE + LR. (16) The optimization problems in Eq. (16) can be solved by using any gradient descent method. In this paper, we adopt the BERTAdam method. 3.5 Inference Layer Because SDRN synchronously processes opinion entity extraction and relation detection, an inference layer is introduced to generate aspect-opinion pairs based on the results of the two channels. With the label sequence Y T predicted by the opinion entity extraction unit at the last recurrent step, we can obtain the aspect set A = {a1, a2, ..., alA} with lA aspects and the opinion set O = {o1, o2, ..., olO} with lO opinion expressions. Then, the relations between aspects and opinion expressions can be calculated according to the weight matrix GT from the relation detection unit. For instance, given an aspect a =  xia S, ..., xia E and an opinion expression o =  xio S, ..., xio E , the correlated degree δ between them can be calculated as follows: δ = 1 2  1 |a| ia E X k=ia S io E X l=io S gk,l + 1 |o| io E X l=io S ia E X k=ia S gl,k  , (17) where |a| and |o| denote the length of aspect and opinion expression. The pair ⟨a, o⟩is extracted only if δ is higher than a given threshold ˆδ. 
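For concreteness, the pair-scoring rule of Eq. (17) and the thresholding step can be written as a small function; the span convention (end-exclusive indices) and the default threshold value below are illustrative assumptions, not the released code.

```python
import torch

def pair_score(attn, aspect, opinion):
    """Correlated degree delta between an aspect span and an opinion span (Eq. 17).

    attn:    (N, N) attention matrix G^T from the relation detection unit
    aspect:  (start, end) token indices of the aspect, end exclusive
    opinion: (start, end) token indices of the opinion expression, end exclusive
    """
    a = torch.arange(*aspect)
    o = torch.arange(*opinion)
    a_to_o = attn[a][:, o].sum(dim=1).mean()   # average, over aspect tokens, of their mass on the opinion
    o_to_a = attn[o][:, a].sum(dim=1).mean()   # average, over opinion tokens, of their mass on the aspect
    return 0.5 * (a_to_o + o_to_a)

def extract_pairs(attn, aspects, opinions, threshold=0.5):
    """Keep every (aspect, opinion) pair whose score exceeds the threshold delta-hat."""
    return [(a, o) for a in aspects for o in opinions
            if pair_score(attn, a, o) > threshold]

# usage: extract_pairs(torch.rand(10, 10).softmax(dim=-1), [(0, 2)], [(4, 5), (7, 9)])
```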
6520 Dataset #Sent #A #O #R SemEval-14 Train 3041 3693 3512 2809 Restaurant Test 800 1134 1014 936 SemEval-14 Train 3045 2359 2500 1535 Laptop Test 800 653 677 380 SemEval-15 Train 1315 1205 1217 1231 Restaurant Test 685 542 516 516 JDPA Camera 3125 6107 4557 4144 Car 6501 8272 11123 8709 MPQA 9471 4676 5849 4823 Table 1: Statistics of datasets. #Sent, #A, #O, and #R represent the number of sentences, aspects, opinion expressions, and relations, respectively. 4 Experiments 4.1 Datasets To evaluate the effectiveness of SDRN, we conduct extensive experiments on five benchmark datasets from SemEval 20144 (Pontiki et al., 2014), SemEval 20155 (Pontiki et al., 2015), MPQA version 2.0 corpus6 (Wiebe et al., 2005), and J.D. Power and Associates Sentiment Corpora7 (JDPA) (Kessler et al., 2010). The statistics of these benchmark datasets are shown in Table 1. For SemEval 2014 and 2015 datasets, we manually build relations between aspects and opinion expressions because the original datasets only contain the gold standard annotation for aspects. Note that we follow the annotations for opinion expressions provided by Wang et al. (2016) and Wang et al. (2017). 4.2 Experimental Setting We adopt the BERTBASE8 model, which consists of 12 Transformer blocks with 12 self-attention heads, as the encoding layer of SDRN. The dimensions of both the embeddings and the context representation in BERTBASE are 768. To enhance the information interaction between the double channels, we set the recurrent step to 2. During training, we use the BERTAdam optimizer with 0.1 warmup rate. The learning rate is set to 2e-5 and 0.001 for finetuning BERT and training our model, respectively. Meanwhile, we set the batch size to 10 and the dropout rate to 0.5. With the cross-validation, other hyper-parameters are set as follows: do = 250, dr = 250, β = 0.1, and ˆδ = 0.5. 4http://alt.qcri.org/semeval2014/task4/ 5http://alt.qcri.org/semeval2015/task12/ 6http://www.cs.pitt.edu/mpqa/ 7http://verbs.colorado.edu/jdpacorpus/ 8https://github.com/google-research/bert 4.3 Evaluation We use F1-score to evaluate the performance of SDRN. We consider a predicted aspect-opinion pair is correct if the gold standard annotations contain a pair the same as the prediction. Besides, following Katiyar and Cardie (2016), we report Binary Overlap F1-score for MPQA dataset. 4.4 Baselines To achieve the comprehensive and comparative analysis of SDRN, we compare it with two kinds of models, including Pipeline methods9 and Joint methods. 4.4.1 Pipeline method For Pipeline methods, we first select five advanced extraction models to recognize opinion entities. Then, we train the relation detection unit (RD) separated from SDRN with BERT to detect relations. The details about RD are described in Section 3.2.2. The outputs of the extraction models are fed into the RD model to predict relations and obtain aspectopinion pairs. The details of the five extraction models are described as follows: • HAST (Li et al., 2018) exploits two useful clues, namely opinion summary and aspect detection history, to extract the aspects with the help of opinion information. Note that HAST can also extract aspects and opinion expressions simultaneously. • DE-CNN (Xu et al., 2018) is a simple but outstanding CNN model employing two types of pre-trained embeddings, including generalpurpose and domain-specific embeddings. We trained two DE-CNN models for aspect and opinion expression extraction, respectively. 
• IMN (He et al., 2019) is an interactive multitask learning network which jointly learns multiple tasks, including aspect and opinion term co-extraction, aspect-level sentiment classification, etc. • SPAN (Hu et al., 2019) is a span-based extraction framework based on BERT. We trained two SPAN models for aspect and opinion expression extraction, respectively. • RINANTE (Dai and Song, 2019) is a weak supervised opinion entity extraction model 9The Pipeline models are expressed in the form of ‘{*}+{#}’, where ‘*’ means the opinion entity extraction method and ‘#’ is the relation detection method. 6521 trained with human-labeled data and rule labeled auxiliary data. 4.4.2 Joint method To sufficiently verify the performance of SDRN, we also compare it with Joint models: IDF (Klinger and Cimiano, 2013b), CRF+ILP (Yang and Cardie, 2013), and LSTM+SLL+RLL (Katiyar and Cardie, 2016). The details can be found in Section 2. 4.5 Experimental Results We demonstrate and analyze the experimental results to answer the following research questions: • How does SDRN perform compared with the baselines on AOPE task? • Can the performance of opinion entity extraction subtask be improved by the joint learning with relation detection? • Does the synchronization unit promote the information interaction and further enhance the joint learning? 4.5.1 Pair Extraction The comparison results of aspect-opinion pair extraction are shown in Table 2 and Table 3. According to the results, SDRN consistently obtains the state-of-the-art performances on five datasets. Compared to the best pipeline model, SDRN outperforms SPAN+RD by 2.31%, 1.14% and 3.39% on 14-Res, 14-Lap and 15-Res, respectively. This indicates that the joint model can effectively avoid the error propagation led by pipeline models. Furthermore, SPAN+RD outperforms other baselines, which shows that BERT can capture rich context representations. Besides, HAST+RD, IMN+RD and RINANTE+RD, which utilize the aspect and opinion term co-extraction models, achieve better performances than DE-CNN+RD. This shows that it is helpful to detect relations with considering latent relations between aspects and opinion expressions during the extraction phase. We also compare SDRN with joint models on JDPA and MPQA datasets, and the results are reported using 10-fold cross validation. According to Table 3, our model brings significant improvements without any hand-crafted features. Particularly, for pair extraction, the results of IDF Joint are 7.4% and 10.5% inferior to IDF Pipeline on JDPA Camera and JDPA Car datasets. This illustrates that joint models may worse than pipeline models without adequate information interaction between opinion entity extraction and relation detection. Models 14-Res 14-Lap 15-Res Pipeline HAST+RD 73.55 64.05 65.20 DE-CNN+RD 71.02 61.11 64.19 IMN+RD 73.69 62.98 65.56 SPAN+RD 74.17 65.99 67.55 RINANTE+RD 74.34 64.17 65.42 Joint SDRN w/o ESM 74.60 66.57 69.28 SDRN w/o RSM 75.01 66.43 69.33 SDRN w/o ESM&RSM 74.28 65.74 67.67 SDRN 76.48 67.13 70.94 Table 2: Experimental results of the aspect-opinion pair extraction compared on three SemEval datasets (F1 score, %). Note that the improvements over the baselines are significant (p < 0.05). Models JDPA Camera JDPA Car MPQA IDF Pipeline 21.5 26.6 N/A IDF Joint 14.1 16.1 N/A CRF+ILP N/A N/A 57.04 LSTM+SLL+RLL N/A N/A 54.98 SDRN 48.63 47.85 63.95 Table 3: Experimental results of aspect-opinion pair extraction compared on JDPA and MPQA datasets (F1 score, %). 
Note that the improvements are significant (p < 0.05). 4.5.2 Opinion Entity Extraction Although our task aims to identify the aspectopinion pairs, it is interesting to investigate the performance of opinion entity extraction. Hence, we compare SDRN with representative aspect and opinion expression extraction methods. The results are shown in Table 4. It is clearly shown that SDRN achieves state-of-the-art results on three datasets, which proves that the opinion entity extraction can be significantly improved by joint training with relation detection. Besides, the aspect and opinion term co-extraction models generally superior to aspect term extraction models, which demonstrates that joint extracting aspects and opinion expressions can benefits each other. HAST and SPAN are special cases of aspect term extraction models, because HAST extracts aspects with the help of opinion semantics, and SPAN adopts BERT as the backbone model. 4.5.3 Synchronization Unit To investigate the efficacy of the synchronization unit composed of ESM and RSM, we perform ablation study and list the results in the second block of Table 2. Concretely, for ‘SDRN w/o ESM’, we drop ESM and simply update the relation hidden representation Hr t via a fully-connection layer. Similarly, ‘SDRN w/o RSM’ drops RSM and adopts a fully-connection layer to update the entity hidden representation Ho t . For ‘SDRN w/o ESM&RSM’, we simultaneously do the above two operations. 6522 Models 14-Res 14-Lap 15-Res A O A O A O WDEmb (Yin et al., 2016) 84.97 N/A 75.16 N/A 69.73 N/A RNCRF† (Wang et al., 2016) 84.93 84.11 78.42 79.44 67.74 67.62 CMLA† (Wang et al., 2017) 85.29 83.18 77.80 80.17 70.73 73.68 HAST (Li et al., 2018) 85.61 85.46* 79.52 78.58* 71.46 70.77* DE-CNN (Xu et al., 2018) 85.20 81.99* 81.59 76.34* 68.28 68.56* IMN† (He et al., 2019) 83.33 85.61 77.96 77.51 70.04 71.94 SPAN (Hu et al., 2019) 86.20* 86.52* 80.67* 82.07* 73.65* 79.13* GMTCMLA† (Yu et al., 2019) 84.50 85.20 78.69 79.89 70.53 72.78 RINANTE† (Dai and Song, 2019) 86.45 85.67 80.16 81.96 69.90 72.09 SDRN 89.49 87.84 83.67 82.25 74.05 79.65 Table 4: Experimental results of opinion entity extraction (F1 score, %). A and O represent the aspect extraction and the opinion expression extraction, respectively. The methods with ‘†’ are aspect and opinion term co-extraction models, and others are aspect term extraction models. The results with ‘*’ are reproduced by us, and others are copied from the released paper. Note that the improvements over baselines are significant (p < 0.05). Reviews SPAN+RD SDRN w/o ESM&RSM SDRN 1. The receiver was full of [superlatives]1,2 for the [quality]1 and [performance]2. (quality, superlatives) (performance, superlatives) (receiver, superlatives)  (quality, superlatives) (performance, superlatives) (quality, superlatives) (performance, superlatives) 2. The [selection of food]1 is [excellent]1, and the [atmosphere]2 is [great]2. (selection, excellent)  (food, excellent)  (atmosphere, great) (selection of food, excellent) (atmosphere, great) (selection of food, excellent) (atmosphere, great) 3. The [bartenders]1 and the [managers]2 are really [nice]1,2 and the [decor]3,4,5 is very [comfy]3 and [laid-back]4, all the while being [trendy]5. (bartenders, nice) (managers, nice) (decor, comfy) (decor, trendy) (bartenders, nice) (managers, nice) (decor, comfy) (-, laid-back)  (decor, trendy) (bartenders, nice) (managers, nice) (decor, comfy) (decor, laid-back) (decor, trendy) Table 5: Case Study. 
The gold standard aspects and opinion expressions are in red and blue, respectively. The gold standard relations are indexed by subscripts, where the aspect and opinion expression in a pair have the same subscript. Figure 3: (a) The analysis of convergence. (b) The comparisons under varying number of recurrent steps. Compared with the Pipeline models, ‘SDRN w/o ESM&RSM’ is less competitive, which demonstrates that joint learning alone is not superior to the pipeline manner. By utilizing ESM or RSM, the performance is improved, which shows that either ESM or RSM is helpful. Specifically, the contribution of ESM is slightly larger than that of RSM. Moreover, with the two synchronization mechanisms, SDRN surpasses all the baselines. 4.6 Convergence and Sensitivity Study In Figure 3(a), we verify the convergence of SDRN. The result shows that our model generally achieves convergence around 15 epochs. Besides, we present the effect of the number of recurrent steps in Figure 3(b). It can be observed that the performance of SDRN increases first and then becomes steady or slightly declines as the step number increases. For 15-Res, the limited training data may be the cause of the performance decline. The best results are generally obtained with two steps on all three datasets, indicating that SDRN with two steps is enough to exploit the interaction information. Figure 4: Visualization of attention scores. The aspects and the opinion expressions are marked with red and blue, respectively. (Best viewed in color.) 4.7 Visualization and Case Study In order to verify the relation detection capability of SDRN, we visualize the attention scores in Figure 4. It is shown that SDRN can accurately capture the relations between aspects and opinion expressions, even with complex reviews. To clearly analyze the effect of the joint learning and the synchronization unit, some predictions of SDRN, ‘SDRN w/o ESM&RSM’ and SPAN+RD are listed in Table 5. It can be concluded that SPAN+RD suffers from the problem of error propagation. For example, it divides ‘selection of food’ into ‘selection’ and ‘food’ in Review #2, and misses ‘laid-back’ in Review #3. In a pipeline setting, it is impossible to obtain a correct pair once there is an incorrect extraction of entities at the first step. Due to the lack of information interaction, ‘SDRN w/o ESM&RSM’ generally faces relation detection errors when relations are complex. For example, it extracts the erroneous pair (receiver, superlatives) in Review #1, and fails to detect the relation between ‘decor’ and ‘laid-back’ in Review #3. In contrast, our model can effectively avoid the above problems. 5 Conclusion In this paper, we explored the Aspect-Opinion Pair Extraction (AOPE) task and proposed Synchronous Double-channel Recurrent Network (SDRN). Specifically, the opinion entity extraction unit and the relation detection unit are designed to extract aspects, opinion expressions and their relations simultaneously. The two units update themselves in a recurrent manner and form two channels, respectively.
Meanwhile, the synchronization unit is devised to integrate high-level interaction information and enable the mutual benefit on opinion entity extraction and relation detection. Extensive experiments showed that our model achieves stateof-the-art performances. Acknowledgments This research is supported by the National Natural Science Foundation of China under grant No. 61976119, the Natural Science Foundation of Tianjin under grant No. 18JCYBJC15800, and the Major Program of Science and Technology of Tianjin under grant No. 18ZXZNGX00310. References Hongliang Dai and Yangqiu Song. 2019. Neural aspect and opinion term extraction with mined rules as weak supervision. In ACL 2019, pages 5268–5277. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL 2019, pages 4171–4186. Zhifang Fan, Zhen Wu, Xin-Yu Dai, Shujian Huang, and Jiajun Chen. 2019. Target-oriented opinion words extraction with target-fused neural sequence labeling. In NAACL 2019, pages 2509–2518. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2019. An interactive multi-task learning network for end-to-end aspect-based sentiment analysis. In ACL 2019, pages 504–515. Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li, and Yiwei Lv. 2019. Open-domain targeted sentiment analysis via span-based extraction and classification. In ACL 2019, pages 537–546. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In SIGKDD 2004, pages 168–177. Arzoo Katiyar and Claire Cardie. 2016. Investigating lstms for joint extraction of opinion entities and relations. In ACL 2016. Jason S. Kessler, Miriam Eckert, Lyndsie Clark, and Nicolas Nicolov. 2010. The 2010 icwsm jdpa sentment corpus for the automotive domain. In 4th International AAAI Conference on Weblogs and Social Media Data Workshop Challenge (ICWSM-DWC 2010). Roman Klinger and Philipp Cimiano. 2013a. Bidirectional inter-dependencies of subjective expressions and targets and their value for a joint model. In ACL 2013, pages 848–854. Roman Klinger and Philipp Cimiano. 2013b. Joint and pipeline probabilistic models for fine-grained sentiment analysis: Extracting aspects, subjective phrases and their relations. In ICDM 2013, pages 937–944. John D. Lafferty, Andrew Mccallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282–289. Xin Li, Lidong Bing, Piji Li, Wai Lam, and Zhimou Yang. 2018. Aspect term extraction with history attention and selective transformation. In IJCAI 2018, pages 4194–4200. Xin Li and Wai Lam. 2017. Deep multi-task learning for aspect term extraction with memory interaction. In EMNLP 2017, pages 2886–2892. 6524 Kang Liu, Liheng Xu, and Jun Zhao. 2012. Opinion target extraction using word-based translation model. In EMNLP 2012, pages 1346–1356. Kang Liu, Liheng Xu, and Jun Zhao. 2015. Coextracting opinion targets and opinion words from online reviews based on the word alignment model. IEEE Trans. Knowl. Data Eng., 27(3):636–650. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In NAACL 2015, pages 486–495. 
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In COLING 2014, pages 27–35. Ana-Maria Popescu and Oren Etzioni. 2005. Extracting product features and opinions from reviews. In EMNLP 2005, pages 339–346. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS 2017, pages 5998–6008. Wenya Wang and Sinno Jialin Pan. 2019. Transferable interactive memory network for domain adaptation in fine-grained opinion extraction. In AAAI 2019, pages 7192–7199. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. In EMNLP 2016, pages 616–626. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2017. Coupled multi-layer attentions for co-extraction of aspect and opinion terms. In AAAI 2017, pages 3316–3322. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2-3):165–210. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, and et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144. Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2018. Double embeddings and cnn-based sequence labeling for aspect extraction. In ACL 2018, pages 592–598. Liheng Xu, Kang Liu, Siwei Lai, Yubo Chen, and Jun Zhao. 2013. Mining opinion words and opinion targets in a two-stage framework. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 1: Long Papers, pages 1764–1773. The Association for Computer Linguistics. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In ACL 2013, pages 1640–1649. Yichun Yin, Furu Wei, Li Dong, Kaimeng Xu, Ming Zhang, and Ming Zhou. 2016. Unsupervised word and dependency path embeddings for aspect term extraction. In IJCAI 2016, pages 2979–2985. Jianfei Yu, Jing Jiang, and Rui Xia. 2019. Global inference for aspect and opinion terms co-extraction based on multi-task neural networks. IEEE/ACM Trans. Audio, Speech & Language Processing, 27(1):168–177. Li Zhuang, Feng Jing, and Xiaoyan Zhu. 2006. Movie review mining and summarization. In CIKM 2006, pages 43–50.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6525–6535 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6525 Clue: Cross-modal Coherence Modeling for Caption Generation Malihe Alikhani Rutgers University [email protected] Piyush Sharma Google Research [email protected] Shengjie Li Rutgers University [email protected] Radu Soricut Google Research [email protected] Matthew Stone Rutgers University [email protected] Abstract We use coherence relations inspired by computational models of discourse to study the information needs and goals of image captioning. Using an annotation protocol specifically devised for capturing image–caption coherence relations, we annotate 10,000 instances from publicly-available image–caption pairs. We introduce a new task for learning inferences in imagery and text, coherence relation prediction, and show that these coherence annotations can be exploited to learn relation classifiers as an intermediary step, and also train coherence-aware, controllable image captioning models. The results show a dramatic improvement in the consistency and quality of the generated captions with respect to information needs specified via coherence relations. 1 Introduction The task of image captioning is seemingly straightforward to define: use natural language to generate a description that captures the salient content of an image. Initial datasets, such as MSCOCO (Lin et al., 2014) and Flickr (Young et al., 2014), approached this task directly, by asking crowd workers to describe images in text. Unfortunately, such dedicated annotation efforts cannot yield enough data for training robust generation models; the resulting generated captions are plagued by content hallucinations (Rohrbach et al., 2018; Sharma et al., 2018) that effectively preclude them for being used in real-world applications. In introducing the Conceptual Captions dataset, Sharma et al. (2018) show that this dataset is large enough, at 3.3M examples, to significantly alleviate content hallucination. However, because the technique for creating such a large-scale resource relies on harvesting existing data from the web, it no longer guarantees consistent image–text relations. For example, along with descriptive captions Figure 1: Output of a coherence-aware model for various coherence relations. Content that establishes the intended relation is underlined. (Photo credit: Blue Destiny / Alamy Stock Photo) Visible: horse and rider jumping a fence. Meta: horse and rider jumping a fence during a race. Subjective: the most beautiful horse in the world. Story: horse competes in the event. (e.g.,“this is a person in a suit”), the dataset also includes texts that provide contextual background (e.g., “this is the new general manger of the team”) and subjective evaluations (e.g., “this is stylish”). As a result, current captioning models trained on Conceptual Captions avoid content hallucination but also introduce different, more subtle and harderto-detect issues related to possible context hallucinations (i.e., is this actually the new general manager?) or subjective-judgement hallucinations (i.e., whose judgment is this anyway?). In this paper, we propose to tackle this issue of large-scale image-caption consistency using a coherence-aware approach inspired by the framework of discourse coherence theory (Hobbs, 1978; Phillips, 1977). 
This framework characterizes the inferences that give discourse units a coherent joint interpretation using a constrained inventory of coherence relations. In multimodal presentations, discourse units can be images as well as text, so we appeal to new image–text coherence relations that 6526 capture the structural, logical, and purposeful relationships between the contributions of the visual modality and the contributions of the textual modality. For instance, a Visible relation characterizes grounding texts that serve to make key aspects of the image content common ground (perhaps to a visually-impaired reader), analogous to Restatement relations between one text unit and another; Visible relations are key to traditional descriptive captions such as “this is a person in a suit.” Meanwhile, a Story relation characterizes texts that develop the circumstances depicted in the image in pursuit of free-standing communicative goals, analogous to Occasion or Narration relations in text; Story relations can go far beyond image content (“I hiked this mountain as we found it on a list for good hikes for kids”) and so pinpoint one kind of risk for context hallucinations. The key contribution of our work is to show that image–text coherence can be systematized, recognized, and used to control image captioning models. To support our argument, we create a coherencerelation annotation protocol for image-caption pairs, which we use to annotate 10,000 imagecaption pairs over images coming from the Conceptual Captions (Sharma et al., 2018) and Open Images (Kuznetsova et al., 2020) datasets. We release1 this dataset, named Clue, to facilitate follow-up research. By annotating these coherence relations in the context of image captioning, we open up the possibility of analyzing patterns of information in image–text presentations at web scale. In addition, we show that we can exploit these coherence-relation annotations by training models to automatically induce them, as well as by building models for coherence-aware image captioning. Because they are driven by input coherence relations, these captioning models can be used to generate captions that are better suited to meet specific information needs and goals. 2 Prior Work There are diverse ways to characterize the communicative functions of text and images in multimodal documents (Marsh and Domas White, 2003), any of which can provide the basis for computational work. Some studies emphasize the distinctive cognitive effects of imagery in directing attention; engaging perceptual, spatial and embodied 1https://github.com/malihealikhani/Crossmodal Coherence Modeling reasoning; or eliciting emotion (Kruk et al., 2019; Shuster et al., 2019). Some look at contrasts across style and genre (Guo et al., 2019). Others look holistically at the content of text and imagery as complementary or redundant (Otto et al., 2019; Vempala and Preotiuc-Pietro, 2019). Unlike our approach, none of these methodologies attempt to characterize information-level inferences between images and text, so none is suitable for building generation models that control the information that text provides. While coherence theory has been applied to a range of multimodal communication, including comics (McCloud, 1993), gesture (Lascarides and Stone, 2009), film (Cumming et al., 2017), and demonstrations and other real-world events (Hunter et al., 2018; Stojnic et al., 2013), applying coherence theory specifically to text–image presentations is less well explored. 
The closest work to ours is Alikhani et al. (2019), who explore coherence relations between images and text in a multimodal recipe dataset. Their relations are specialized to instructional discourse and they do not build machine learning models combining imagery and text. We consider more general coherence relations and a broader range of machine learning methods. We use our relations and introduce a coherenceaware caption generation model that improves the rate of good Visible captions by around 30%. This is a considerable improvement over the recent models that have tried to achieve more control over neural language generation using an enhanced beam search (Anderson et al., 2017), a memory network with multiple context information (Chunseong Park et al., 2017), forced attentions (Sadler et al., 2019) and modeling and learning compositional semantics using fine-grained annotations of entities in MSCOCO (Cornia et al., 2019). 3 Coherence in Images and Captions The first step toward our goals is to characterize image–text coherence and annotate a sizable corpus of image–text pairs with coherence relations. We use an overlapping set of high-level relations, inspired both by theoretical work linking discourse coherence to discourse structure and discourse goals (Roberts, 2012; Webber et al., 1999), and by previous successful discourse annotation campaigns (Prasad et al., 2008). Crucially, following previous work on text (Rohde et al., 2018) and multimodal discourse (Alikhani et al., 2019), 6527 Visible, Meta (a) CAPTION: forest on a sunny day Visible, Action, Subjective (b) CAPTION: young happy boy swimming in the lake. Meta, Action, Story (c) CAPTION: approaching our campsite, at 1550m of elevation on the slopes. Irrelevant (d) CAPTION: young girl walking on the dry grass field under daylight. Figure 2: We use a constrained set of coherence relations to summarize the structural, logical and purposeful relationships between the contributions of text and the contributions of images. Multiple coherence relations can be found simultaneously. (Image–caption pairs are chosen from the Conceptual Caption dataset; photo credits: Dmytro Zinkevych; Shutterstock user yauhenka; Danilo Hegg; Andre Seale) we assume that several of these relations can hold concurrently. The relations are: • Visible, where text presents information that is intended to recognizably characterize what is depicted in the image, analogous to Restatement relations in text (Prasad et al., 2008). • Subjective, where the text describes the speaker’s reaction to, or evaluation of, what is depicted in the image, analogous to Evaluation relations in text (Hobbs, 1985); • Action, where the text describes an extended, dynamic process of which the moment captured in the image is a representative snapshot, analogous to Elaboration relations in text (Prasad et al., 2008); • Story, where the text is understood as providing a free-standing description of the circumstances depicted in the image, analogous to the Occasion relation of Hobbs (1985) but including instructional, explanatory and other background relations; and • Meta, where the text allows the reader to draw inferences not just about the scene depicted in the image but about the production and presentation of the image itself, analogous to Meta-talk relations in text (Schiffrin, 1980). Figures 2(a), (b) and (c) show examples of image–caption pairs and the associated coherence relations. We can see that image–caption pairs often have multiple relations. 
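Since several relations can hold for the same image–caption pair, each annotation is naturally a set of labels rather than a single class. A minimal, hypothetical way to represent this schema in code (ours, not the released annotation format) is:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Relation(Enum):
    """The image-text coherence relations defined above."""
    VISIBLE = auto()      # text recognizably characterizes what the image depicts
    SUBJECTIVE = auto()   # text reacts to or evaluates the depicted scene
    ACTION = auto()       # text describes an extended process that the image snapshots
    STORY = auto()        # text gives a free-standing account of the circumstances
    META = auto()         # text licenses inferences about the image's production/presentation

@dataclass
class AnnotatedPair:
    image_url: str
    caption: str
    relations: set = field(default_factory=set)   # several relations may hold at once

# e.g., the pair in Figure 2(b) would be
# AnnotatedPair("...", "young happy boy swimming in the lake.",
#               {Relation.VISIBLE, Relation.ACTION, Relation.SUBJECTIVE})
```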
For completeness, we also present in Figure 2(d) an example of an image– caption pair that does not fall into any of the above categories (and it is therefore labeled Irrelevant). 3.1 Data Collection Clue includes a total of 10,000 annotated image– caption pairs. A first subset of 5,000 image–caption pairs was randomly selected from the training split of the Conceptual Captions dataset (Sharma et al., 2018), as a representative sample of humanauthored image captions. The Conceptual Captions dataset is a collection of web-harvested images paired with their associated ALT-TEXT, created by human authors under various non-public guidelines (regarding style, objectivity, etc.) for over 111,000 web pages including news articles, advertisements, educational posts, blogs, etc. A second subset of 5,000 image–caption pairs, to be used as a representative sample of machineauthored captions, is obtained from the outputs of 5 of the top models that participated in the imagecaptioning challenge for the Conceptual Caption Workshop at the 2019 Conference on Computer Vision and Pattern Recognition (CVPR). These machine-authored captions are over a set of 1,000 images from the Open Images Dataset (Kuznetsova et al., 2020), and are publicly available.2 Protocol Although specific inferences have been shown to be realizable by crowd workers (Alikhani et al., 2019), the results of our pilot studies for annotating these more general relations with the help of crowd workers were not satisfactory. We have found that expert raters’ decisions, however, have high agreement on our discourse categories. The study has been approved by Rutgers’s IRB; the annotators, two undergraduate linguistics students, were paid a rate of $20/h. In our annotation protocol, we ask the annotators to label the main relations described in Section 3, as well as certain fine-grained sub-relations. The following briefly summarizes our guidelines; our GitHub repository includes an exact copy of what 2http://www.conceptualcaptions.com/winners-and-data 6528 the annotators used. Annotations of Visible are given for captions that present information intended to recognizably characterize what is depicted in the image, while annotations of Meta indicate not only information about the scene depicted but also about the production and presentation of the image itself. The Meta labels have additional fine-grained labels such as When, How, and Where. A few details regarding these fine-grained labels are worth mentioning: location mentions such as “in the city” are labeled as Meta-Where, but generic states, e.g., “in the snow,” are merely annotated as Visible. Captions considering the view or the photo angles, or a photos composition, i.e. “portrait” or “close-up”, are annotated as Meta-How. Annotations of Subjective are primarily given for captions that included phrases with no objective truth value, i.e. phrases using predicates of personal taste. For example, captions including noun phrases like “pretty garden” are annotated as Subjective: whether the garden is pretty or not cannot be determined except by appeal to the opinions of an implicit judge. Note that first-person reports, like “I want ...” or “I need ...” are not annotated as Subjective but rather as Story, because they describe the speaker’s definite state rather than an implicit judgment. Captions annotated as Story cover a much wider range compared to captions in other categories, including Meta and Subjective. These captions range from those that read like instructions, i.e. 
“how to ...”, to those that present speaker desires, i.e. “I want ...” or “I need ...”, to those that give background information not captured in the image, i.e. “she is an actress and model”, and more. Other and Irrelevant Some of these image– caption pairs contain incomplete captions that are hard to understand. A number of these examples include images that contained text. The text in these cases is relevant to the image and the accompanying captions; in this cases, the coherence relations are marked as Other–Text (Figure 3). Some examples of such instances are images containing signs with text, greetings on cards, or text that does not affect the interpretation of the image or caption, such as city names or watermarks. Other times, the caption text is irrelevant and indicate that the image and caption do not correlate. Some examples of these instances are captions of “digital art selected for” paired with an irrelevant Other–Text (a) CAPTION: a gardener may water the plant daily but fruits grow only in the season. Other–Gibberish (b) CAPTION: actor in retail at the mother. Figure 3: Examples of image–caption pairs in the Other category. (Photo credit: santabanta.com; Mary Sollosi) image, and images that clearly do not match the caption, such as an image of a man walking with the caption “a field of strawberries”. We have specifically labeled cases where the caption is almost true or almost relevant to the image at hand, such as the caption “horses in a field” with an image containing donkeys with “minor error”. Other cases include images that look like powerpoint slides with bullets and text. Our GitHub repository includes detailed examples and explanations. Experiment Interface We have developed software for annotating coherence relations in image– text presentations that can flexibly and easily accommodate various annotation schema. The annotators used this software for annotating the image– text pairs. They had the option of choosing multiple items and leaving comments. Agreement To assess the inter-rater agreement, we determine Cohens κ. For this, we randomly chose 300 image–caption pairs from the Conceptual Caption ground-truth data and assigned them to two annotators. The resulting κ coefficient is 0.81, which indicates a high agreement on these categorical decisions. 3.2 Analysis In this section we present the overall statistics of the dataset annotations, the limitations of the captiongeneration models, and the correlation of the distribution of the coherence relations with genre. Overall statistics The exact statistics over the resulting annotations are presented in Table 1 and Table 2. Overall, Visible captions constitute around 65% and 70% of captions for the ground-truth labels and the model outputs, respectively. The rate 6529 Visible Subjective Action Story Meta Irrelevant Ground-truth 64.97% 9.77% 18.77% 29.84% 24.59% 3.09% Model output 69.72% 1.99% 11.22% 17.19% 58.94% 16.97% Ground-truth + Model 66.91% 6.58% 15.68% 24.67% 38.65% 8.77% Table 1: Distribution of coherence relations over the ground-truth and the model outputs. When How Where Ground-truth 33.74% 64.40% 28.60% Model output 21.75 % 72.84% 41.03% Table 2: Distribution of fine-grain relations in the Meta category over the ground-truth and the model outputs. of Subjective and Story captions decreases significantly for the model outputs (compared to groundtruth), indicating that the models learn to favor the Visible relation at the expense of Subjective and Story. 
However, the rate of Meta captions increases by around 25% in the model outputs, which points to potential context hallucination effects introduced by these models. As expected, the rate of Irrelevant captions increases to around 17% in the model-generated captions, compared to 3% in the ground-truth captions. Moreover, it appears that the models have some ability to learn to generate the locations that events take place; however, there is a drop in their ability to generate temporal information (see Table 2). In terms of overlap, Visible and Meta overlap 22.49% of the time for the ground-truth captions, whereas this rate goes up to 54.55% in the model outputs. This “conflation” of these two relations is highly problematic, and one of the main motivations for building caption-generation models that have control over the type of discourse relation they create (see Section 5). Our GitHub page includes additional statistics about overlapping relations. Coherence relations indicate Genre Coherence relations are indicative of the discourse type and its goals, and therefore our annotations correlate with the genre under which the captions have been produced. That is, image–caption pairs from different publication sources have different distributions of coherence relations. For instance, pairs from the Getty Images domain mostly come with the Meta and Visible relations. In contrast, from the Daily Mail domain are mostly story-like, and include very few captions that describe an action, compared with the Getty Images and picdn domains. Figure 4 shows the distribution of the coherence labels for the top four domains from the Conceptual Caption dataset. Figure 4: Different resources have different kinds image–caption pairs. The graph shows the distribution of labels in the top four domains present in the Conceptual Captions dataset. 4 Predicting Coherence Relations In this section, we introduce the task of predicting cross-modal coherence relations. We describe a number of preliminary experiments that justify the potential of machine learning models in classifying coherence relations in text and imagery. To this end, we train and test different models on the Clue dataset to automatically predict the coherence labels given an image and its caption. 4.1 Multi-Label Prediction We first treat the relation prediction problem in its original multi-label setting. The train–test split for all the models described in this section is 80%– 20% and the numbers are reported using 5-fold cross validation. As a baseline, we report the results of a SVM classifier that uses only the text to predict the relationship between image-caption pairs. We extract bag-of-words features by using N-grams (for N from 1 to 5), and pass them to the SVM classifier 6530 Visible Subjective Action Story Meta Irrelevant Weighted SVM (text-only) 0.83 0.12 0.32 0.21 0.19 0.00 0.48 GloVe (text-only) 0.80 0.44 0.58 0.57 0.44 0.08 0.63 BERT (text-only) 0.82 0.35 0.62 0.62 0.44 0.06 0.65 GloVe + ResNet 0.81 0.36 0.58 0.60 0.45 0.07 0.64 BERT + ResNet 0.83 0.36 0.69 0.62 0.44 0.06 0.67 Table 3: The F1 scores of the multi-class classification methods described in Section 4.1; 80-20 train-test split; 5-fold cross validation. as input. Next, we discuss two multi-modal classifiers for predicting the image–caption coherence relations. GloVe + ResNet-50 This model contains a text encoder for textual-feature extraction and an image encoder for image-feature extraction. 
For the image encoder, we use a ResNet-50 (He et al., 2016) pre-trained on ImageNet followed by a BatchNorm layer, a fully connected layer and a ReLU activation function. The text encoder takes as input word embeddings from the GloVe model (Pennington et al., 2014), and consists of an LSTM layer, a Batch-Norm layer, a fully connected layer with tanh activation function. BERT + ResNet-50 To test the impact of the text encoder in this setup, we reuse the setup of the previous model with a different textual-feature extractor. We train and test using an encoder that takes sentence embeddings as input using the ⟨CLS⟩representation produced by the BERT-base model (Devlin et al., 2018). Results The results of all of our models are presented in Table 3, where we present the F1 scores over each of the individual relations, as well as an overall weighted average. The BERT+ResNet model achieves the highest performance (|t| > 9.54, p < 0.01), with an overall F1 score of 0.67. For the interested reader, we present in the GitHub page the top features of the Naive Bayes SVM classifier (Wang and Manning, 2012). 4.2 Single-Label Prediction To achieve the goal of generating captions with a desired coherence relation to the image, it is important to clearly differentiate between often cooccurring label types (such as Visible and Meta). To this end, we introduce a label-mapping strategy for predicting coherence relations, such that each image–caption pair is assigned a single coherence label. We map the set of human-annotated coherence relations for an image–caption pair to a single label using the following heuristic: 1. If the set contains the Meta label, then the image–caption pair is assigned the Meta label. 2. If the set contains the Visible label and does not contain either Meta or Subjective, then the image–caption pair is set to Visible. 3. If none of the above rules are met for this image–caption pair, we randomly sample a label from its set of labels. The distribution of labels after this mapping is given in the first row of Table 4. As opposed to the ground-truth label distribution in Table 1, these values add up to 100%. Using the label mapping described above, we retrain and evaluate the BERT+ResNet classifier presented in Sec. 4.1. In addition, we perform additional experiments in which the caption text in encoded using the pre-trained Universal Sentence Encoder3 (USE) (Cer et al., 2018), which returns a 512-dimensional embedding for the text. On the image encoding side, we also experiment with the pre-trained Graph-Regularized Image Semantic Embedding model (Juan et al., 2020), which is trained over ultra-fine–grained image labels over web-sized amounts of data – roughly 260M examples over roughly 40M labels; this model returns a compact, 64-dimensional representation for the image. We concatenate the text and image features into a single vector, and feed it to a fully-connected neural network with 3 hidden layers of 256 units each with ReLU activations (for all but the last one), followed by a softmax layer which computes the logits for the 6 target classes. We divide the 3910 labeled image–text pairs from the groundtruth split of our data into training and test sets, with 3400 and 510 samples, respectively. 
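As a minimal sketch of the single-label classifier just described (concatenated USE and Graph-RISE embeddings, three 256-unit hidden layers, logits over the six relations), the architecture can be written as follows. This is illustrative only: the embeddings are assumed to be precomputed, module and variable names are hypothetical, and the exact placement of dropout and activations is an assumption rather than a detail stated in the text.

```python
import torch
import torch.nn as nn

class SingleLabelClassifier(nn.Module):
    """Concatenate a 512-d USE text embedding and a 64-d Graph-RISE image
    embedding, pass them through three 256-unit hidden layers, and output
    logits over the six coherence labels (softmax is applied in the loss)."""
    def __init__(self, text_dim=512, image_dim=64, hidden=256,
                 num_labels=6, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, num_labels))

    def forward(self, text_emb, image_emb):
        # text_emb: (batch, 512), image_emb: (batch, 64) -> logits: (batch, 6)
        return self.net(torch.cat([text_emb, image_emb], dim=-1))
```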
We use dropout with probability of 0.5, and tune the model parameters using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 10^-6.
3 tfhub.dev/google/universal-sentence-encoder-large/3

Table 4: The F1 scores of coherence relation classifiers with label mapping. The aggregated Weighted scores use the numbers in the first row as weights.
                           Visible  Subjective  Action  Story   Meta    Irrelevant  Weighted
Ground-truth Distribution  46.65%   7.07%       1.31%   19.09%  23.42%  2.46%
BERT + ResNet              0.64     0.26        0.02    0.52    0.46    0.07        0.52
BERT + GraphRise           0.59     0.15        0.00    0.42    0.34    0.00        0.45
USE + GraphRise            0.69     0.45        0.00    0.57    0.48    0.00        0.57

Results Table 4 shows the results of the single-label prediction experiments, where we present the F1 scores over each of the individual relations, as well as an overall weighted average. The USE+GraphRise model using the label mapping achieves the highest performance, with an overall F1 score of 0.57. Next, we describe how we use this classifier's predictions to annotate the training and validation splits of the Conceptual Captions dataset (3.3 million image–caption pairs), in order to train a controllable caption-generation model. 5 Generating Coherent Captions We use the coherence label predictions on the Conceptual Captions dataset (Section 4) to train a coherence-aware caption generation model.
Figure 5: Coherence-aware image captioning model.
Model We model the output caption using a sequence-generation approach based on Transformer Networks (Vaswani et al., 2017). The output is the sequence of sub-tokens comprising the target caption. The input is obtained by concatenating the following features. Image Features We obtain a 64-dimensional representation for the image using the Graph-RISE (Juan et al., 2020) feature extractor, which employs a ResNet-101 network to classify images into some 40M classes. We do not fine-tune this image encoder model. We use the 64-dimensional feature available immediately before the classification layer, and embed it into the Transformer encoder embedding space using a trainable dense layer. Detected Objects We obtain object labels for the image using the Google Cloud Vision API.4 We represent each label using pre-trained 512-dimensional vectors trained to predict co-occurring objects on web pages, in a similar fashion to the word2vec model (Mikolov et al., 2013). We embed each of these into the Transformer encoder embedding space using a trainable dense layer. Coherence relation label This is an input label fed at training time, for which we use the inferred coherence relation for the image–caption pair; at inference time, the label input is used to control the information in the generated caption. Embeddings for the coherence labels are trainable model parameters. Additionally, the relation label serves as the start token for the Transformer decoder (Figure 5), i.e., it is made available both to the encoder network and directly to the decoder network. When training and evaluating a coherence-agnostic model, this label is set to a special symbol, such as NONE, essentially running the model without coherence information. For all models described in this paper, the Transformer network has 6 encoder layers, 6 decoder layers, 8 attention heads, and a 512-dimensional embedding space.
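To make this input construction concrete, the following is a minimal sketch of the coherence-aware captioner. It is illustrative only: tensor shapes, the vocabulary size, and the number of coherence labels (including the NONE symbol) are assumptions, and padding, positional encodings, and masking details beyond the causal decoder mask are omitted.

```python
import torch
import torch.nn as nn

class CoherenceAwareCaptioner(nn.Module):
    """Sketch: image feature, detected-object vectors, and a coherence-label
    embedding are projected into a shared space and fed to a Transformer;
    the coherence label also serves as the decoder start token."""
    def __init__(self, d_model=512, image_dim=64, object_dim=512,
                 num_labels=7, vocab_size=32000):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, d_model)      # trainable dense layer
        self.object_proj = nn.Linear(object_dim, d_model)    # trainable dense layer
        self.label_emb = nn.Embedding(num_labels, d_model)   # incl. a NONE symbol
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8, num_encoder_layers=6,
            num_decoder_layers=6, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image_feat, object_vecs, coherence_label, caption_tokens):
        label = self.label_emb(coherence_label).unsqueeze(1)           # (B, 1, d)
        enc_in = torch.cat([self.image_proj(image_feat).unsqueeze(1),  # (B, 1, d)
                            self.object_proj(object_vecs),             # (B, N, d)
                            label], dim=1)
        # Coherence label as start token, followed by the shifted caption tokens.
        dec_in = torch.cat([label, self.token_emb(caption_tokens[:, :-1])], dim=1)
        mask = self.transformer.generate_square_subsequent_mask(
            dec_in.size(1)).to(dec_in.device)
        out = self.transformer(enc_in, dec_in, tgt_mask=mask)
        return self.lm_head(out)  # logits used to predict each caption token
```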
6 Results and Evaluation In what follows, we discuss evidence for our hypotheses: (a) a coherence-aware model presents information that is aligned with the goal of the discourse; and (b) a coherence-aware model can significantly improve caption quality.
4 cloud.google.com/vision

Table 5: The distribution of coherence relations in image–caption pairs when captions are generated with the discourse-aware model vs the discourse-agnostic model (the mode of the distribution in bold).
             Coherence-agnostic  Visible-aware  Subjective-aware  Story-aware  Meta-aware
Visible      52.1%               79.9%          31.7%             25.0%        42.80%
Subjective   11.4%               2.6%           24.4%             2.6%         1.9%
Action       10.7%               10.8%          6.3%              8.8%         11.4%
Story        51.3%               16.0%          45.0%             58.8%        17.34%
Meta         31.2%               32.8%          15.1%             17.7%        46.5%
Irrelevant   12.2%               12.3%          10.7%             9.9%         21.40%
When         9.5%                5.6%           4.1%              17.7%        9.6%
How          21.3%               21.3%          9.6%              25.0%        30.26%
Where        5.3%                8.6%           4.1%              8.8%         16.6%

Figure 6: Captions generated by the coherence-aware and coherence-agnostic models. (Photo credits: YesVideo; TinnaPong; Sok Chien Lim; GoPro) (a) coherence-aware Meta: A girl in the winter forest. coherence-agnostic: beautiful girl in a red dress. (b) coherence-aware Visible: the pizza at restaurant is seen. coherence-agnostic: the best pizza in the world. (c) coherence-aware Subjective: beautiful chairs in a room. coherence-agnostic: the living room of the home. (d) coherence-aware Story: how to spend a day. coherence-agnostic: dogs playing on the beach.

Evaluation by expert annotators We train the model described above with the predicted discourse relation labels for image–caption pairs in the Conceptual Captions training and validation sets. The checkpoint with the highest CIDEr (Vedantam et al., 2015) score on the validation set is selected for inference and human evaluations. We asked our annotators to annotate a subset of randomly selected image–caption pairs generated by this model. These evaluation images were selected from the Conceptual Captions evaluation set based on their predicted coherence label, using the single-label classifier (Section 4) on the captions generated by the coherence-agnostic model (Section 5). According to our sensitivity power analysis, with a sample size of 1500 image–text pairs, 300 in each category, we are able to detect effect sizes as small as 0.1650 with a power and significance level of 95%. Table 5 shows the result distributions for the coherence-agnostic and coherence-aware models. Differences greater than 3% are statistically significant (p < 0.05, t > 2.5). The ability to control the generated caption using an input coherence relation is clear: when asking for Visible (the column under Visible), 79.85% of the captions are evaluated to fit the Visible label (non-overlapping), an absolute increase of 27.7% over the coherence-agnostic model (with only 52.09% Visible); at the same time, the rate of Story and Subjective captions reduces significantly. This reduction is particularly noteworthy in the light of eliminating potential context hallucinations, which are likely to be found under the Story and Subjective labels. A similar trend is observed when asking for, e.g., Meta: 46.49% of the captions are evaluated to fit the Meta label (non-overlapping; the column under Meta), up 15.3% over the coherence-agnostic model (with 31.18% Meta).
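As one illustration of how such differences in label proportions can be checked (this is not necessarily the exact procedure behind the reported t statistics; the counts below are derived from the rounded percentages under the assumption of roughly 300 evaluated pairs per condition), a two-proportion z-test can be run as follows:

```python
from statsmodels.stats.proportion import proportions_ztest

# Visible rate under the Visible coherence-aware model vs. the coherence-agnostic
# model; approximate counts reconstructed from Table 5 (79.9% and 52.1% of ~300).
counts = [240, 156]
nobs = [300, 300]
z_stat, p_value = proportions_ztest(count=counts, nobs=nobs)
print(z_stat, p_value)  # a gap this large is highly significant (p << 0.05)
```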
A qualitative analysis of the generated captions shows that captions generated under the Meta label include terms such as “screenshot” and “view”, while Subjective captions come with adjectives such as “beautiful” or “favorite”. Figure 6 shows several examples. 6533 Crowdsouring and Automatic Metrics For the following experiments, a subset of the Conceptual Captions validation data was selected where the ground-truth captions are labeled as Visible. To compare the quality of the generated captions using our framework with other models, we follow the same crowdsourcing protocol that Sharma et al. (2018) employed for quality assessment. We asked subjects whether the generated captions are “good” or not. 86% of the captions generated by the coherence-aware model were selected as “good” captions, whereas only 74% of the captions generated by the coherence-agnostic model were selected as “good” captions. Note that, based on the human-evaluation data published5 for the Conceptual Caption Workshop at CVPR 2019, this rate is on average 67% “good” captions for the participating state-of-the-art models in 2019. Furthermore, in a follow-up experiment we ask subjects to choose between a caption generated by the coherence-aware model and one generated by the coherence-agnostic model: 68.2% of the time subjects preferred the coherence-aware result, versus 31.8% for the coherence-agnostic one. In addition, we study the quality and the relevance of the captions generated by our model as suggested by (van der Lee et al., 2019). On a scale of 0 to 5, the average scores of the quality of the captions generated by the coherence-aware and the coherence-agnostic model are, respectively, 3.44 and 2.83. The average score of the relevance for the coherence-aware and the coherence-agnostic conditions are, respectively, 4.43 and 4.40. Note that subjects rated the quality and the relevance of the captions while seeing the questions on the same page. Screenshots and code for the experiments can be found on our GitHub page. With the exception of the relevance condition, the results of the other questions that we asked in the crowdsourcing experiments are statistically significantly different (p < 0.05, t > |3.1|), which indicates that subjects prefer captions generated by the coherence-aware model. We also mention here that this difference in quality, albeit significant from a human-rating perspective, is not reflected in the CIDEr score computed on the same data (against the available reference captions). The CIDEr score of the captions generated by the coherence-aware and the coherence-agnostic models are, respectively, 0.958 and 0.964. This is not surprising, as 5http://www.conceptualcaptions.com/winners-and-data the reference captions used by CIDEr are subject to the same distribution over coherence relations as the rest of the data, and therefore generating caption outputs with a different coherence-relation distribution (Table 5) is unlikely to have a positive impact on reference-driven metrics such as CIDEr. 7 Conclusions and Future Work Representing coherence in image–text presentations can provide a scaffold for organizing, disambiguating and integrating the interpretation of communication across modalities. We show that cross-modal coherence modeling significantly improves the consistency and quality of the generated text with respect to information needs. 
This is a step forward towards designing systems that learn commonsense inferences in images and text and use that to communicate naturally and effectively with the users. In addition, the presented dataset, Clue, provides opportunities for further theoretical and computational explorations. The experiments described for the coherence relation prediction task set the stage for designing better models for inferring coherence for images–text pairs. The presented work has limitations that can be addressed in future research. According to the description of the Conceptual Captions dataset, its captions have been hypernymized. However, by studying the examples in the Other category, we discovered an additional coherence relation that exists between an image and caption, in which the caption identifies an object or entity in the image– Identification. Examples of this relation involves a caption that mentions the brand of a product or the name of the person in the image. Identification is easy to annotate but missing from this work due to the properties of the corpus we annotated. Future work should study this additional relation in the context of caption annotation and generation. Acknowledgement The research presented here is supported by NSF Awards IIS-1526723 and CCF-19349243 and through a fellowship from the Rutgers Discovery Informatics Institute. Thanks to Gabriel Greenberg and the anonymous reviewers for helpful comments. We would also like to thank the Mechanical Turk annotators for their contributions. We are grateful to our data annotators, Ilana Torres and Kathryn Slusarczyk for their dedicated work. 6534 References Malihe Alikhani, Sreyasi Nag Chowdhury, Gerard de Melo, and Matthew Stone. 2019. Cite: A corpus of image-text discourse relations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 570–575. Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2017. Guided open vocabulary image captioning with constrained beam search. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 936–945, Copenhagen, Denmark. Association for Computational Linguistics. Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174, Brussels, Belgium. Association for Computational Linguistics. Cesc Chunseong Park, Byeongchang Kim, and Gunhee Kim. 2017. Attend to you: Personalized image captioning with context sequence memory networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Marcella Cornia, Lorenzo Baraldi, and Rita Cucchiara. 2019. Show, control and tell: A framework for generating controllable and grounded captions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8307–8316. Samuel Cumming, Gabriel Greenberg, and Rory Kelly. 2017. Conventions of viewpoint coherence in film. Philosophers’ Imprint, 17(1):1–29. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Longteng Guo, Jing Liu, Peng Yao, Jiangwei Li, and Hanqing Lu. 
2019. Mscap: Multi-style image captioning with unpaired stylized text. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4204–4213. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. Jerry R Hobbs. 1978. Why is discourse coherent. Technical report, SRI INTERNATIONAL MENLO PARK CA. Jerry R. Hobbs. 1985. On the coherence and structure of discourse. Technical report, Center for the Study of Language and Information, Stanford University. Julia Hunter, Nicholas Asher, and Alex Lascarides. 2018. A formal semantics for situated conversation. Semantics and Pragmatics. Da-Cheng Juan, Chun-Ta Lu, Zhen Li, Futang Peng, Aleksei Timofeev, Yi-Ting Chen, Yaxi Gao, Tom Duerig, Andrew Tomkins, and Sujith Ravi. 2020. Ultra fine-grained image semantic embedding. In Proceedings of the 13th International Conference on Web Search and Data Mining, WSDM 20, page 277285, New York, NY, USA. Association for Computing Machinery. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Julia Kruk, Jonah Lubin, Karan Sikka, Xiao Lin, Dan Jurafsky, and Ajay Divakaran. 2019. Integrating text and image: Determining multimodal document intent in Instagram posts. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4614–4624, Hong Kong, China. Association for Computational Linguistics. Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. 2020. The open images dataset v4. International Journal of Computer Vision, pages 1– 26. Alex Lascarides and Matthew Stone. 2009. A formal semantic analysis of gesture. Journal of Semantics, 26(4):393–449. Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation, pages 355–368, Tokyo, Japan. Association for Computational Linguistics. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer. Emily E Marsh and Marilyn Domas White. 2003. A taxonomy of relationships between images and text. Journal of Documentation, 59(6):647–672. Scott McCloud. 1993. Understanding comics: The invisible art. William Morrow. 6535 Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119. Christian Otto, Matthias Springstein, Avishek Anand, and Ralph Ewerth. 2019. Understanding, categorizing and predicting semantic image-text relations. In Proceedings of the 2019 on International Conference on Multimedia Retrieval, pages 168–176. ACM. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. B Phillips. 1977. 
A calculus of cohesion. In Fourth LACUS Forum, Montreal, Canada. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The Penn discourse treebank 2.0. In LREC. Citeseer. Craige Roberts. 2012. Information structure: Towards an integrated formal theory of pragmatics. Semantics and Pragmatics, 5:6–1. Hannah Rohde, Alexander Johnson, Nathan Schneider, and Bonnie Webber. 2018. Discourse coherence: Concurrent explicit and implicit relations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2257–2267, Melbourne, Australia. Association for Computational Linguistics. Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. 2018. Object hallucination in image captioning. arXiv preprint arXiv:1809.02156. Philipp Sadler, Tatjana Scheffler, and David Schlangen. 2019. Can neural image captioning be controlled via forced attention? In Proceedings of the 12th International Conference on Natural Language Generation, pages 427–431, Tokyo, Japan. Association for Computational Linguistics. Deborah Schiffrin. 1980. Meta-talk: Organizational and evaluative brackets in discourse. Sociological Inquiry, 50(34):199–236. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2556–2565. Kurt Shuster, Samuel Humeau, Hexiang Hu, Antoine Bordes, and Jason Weston. 2019. Engaging image captioning via personality. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Una Stojnic, Matthew Stone, and Ernest Lepore. 2013. Deixis (even without pointing). Philosophical Perspectives, 26(1):502–525. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566–4575. Alakananda Vempala and Daniel Preotiuc-Pietro. 2019. Categorizing and inferring the relationship between the text and image of twitter posts. In Proceedings of the 2019 Conference of the Association for Computational Linguistics. Sida Wang and Christopher D Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th annual meeting of the association for computational linguistics: Short papers-volume 2, pages 90–94. Association for Computational Linguistics. Bonnie Webber, Alistair Knott, Matthew Stone, and Aravind Joshi. 1999. Discourse relations: A structural and presuppositional account using lexicalised tag. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, pages 41–48. Association for Computational Linguistics. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL, 2:67–78.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6536–6542 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6536 Knowledge Supports Visual Language Grounding: A Case Study on Colour Terms Simeon Sch¨uz Friedrich Schiller University Jena [email protected] Sina Zarrieß Friedrich Schiller University Jena [email protected] Abstract In human cognition, world knowledge supports the perception of object colours: knowing that trees are typically green helps to perceive their colour in certain contexts. We go beyond previous studies on colour terms using isolated colour swatches and study visual grounding of colour terms in realistic objects. Our models integrate processing of visual information and object-specific knowledge via hard-coded (late) or learned (early) fusion. We find that both models consistently outperform a bottom-up baseline that predicts colour terms solely from visual inputs, but show interesting differences when predicting atypical colours of so-called colour diagnostic objects. Our models also achieve promising results when tested on new object categories not seen during training. 1 Introduction Research on human perception has shown that world knowledge supports the processing of sensory information (Mitterer et al., 2009; Ishizu, 2013). For instance, humans have been found to use their knowledge about typical colours of an object when perceiving an instance of that object, in order to compensate for, e.g., perceptually challenging illumination conditions and achieve colour constancy (Mitterer and de Ruiter, 2008; Witzel and Gegenfurtner, 2018). Thus, the visual perception of object colours can be thought of as leveraging top-down knowledge for bottom-up processing of sensory input, in accordance with traditional approaches in psychology (e.g. Colman, 2009). The integration of visual information and world knowledge in perception, however, is far from obvious, with views ranging from processing through bidirectionally connected bottom-up and top-down components to the assumption that visual and conceptual representations themselves are inseparably intertwined (Kubat et al., 2009). Figure 1: Example object from VisualGenome with annotated colour attribute. The tree is described as “green”, despite of challenging illumination conditions. A lot of recent work in Language & Vision (L&V) has looked at grounding language in realistic sensory information, e.g. images of complex, real-world scenes and objects (Bernardi et al., 2016; Kafle and Kanan, 2017). In L&V, however, the use of top-down knowledge has mostly been discussed in the context of zero-shot or few-shot learning scenarios where few or no visual instances of a particular object category are available (Frome et al., 2013; Xian et al., 2018). 1 We present a simple experiment on language grounding that highlights the great potential of top-down processing even for very common words with a lot of visual instances: we learn to ground colour terms in visual representations of real-world objects and show that model predictions improve strongly when incorporating prior knowledge and assumptions about the object itself. We investigate visual grounding of colour terms by combining bottom-up and top-down modeling components based on early and late fusion strategies, reflecting different interpretations about the integration of visual and conceptual information in human perception. 
We find that these strategies lead to differ1Note that in L&V, the term “top-down” has recently been used in a different way in the context of attention models where it refers to systems that selectively attend to the output of a certain layer (Anderson et al., 2018). 6537 ent predictions, especially for atypical colours of objects that do have a strong tendency towards a certain colour.2 2 Related Work Even recent work on colour terms has mostly been using artificial datasets with descriptions of isolated colour swatches that show a single hue, primarily examining effects of context and conversational adequacy in colour naming (Baumgaertner et al., 2012; Meo et al., 2014; McMahan and Stone, 2015; Monroe et al., 2016, 2017; Winn and Muresan, 2018). However, object colours bear a range of additional challenges for perception and grounding: (i) chromatic variation due to lighting and shading (Witzel and Gegenfurtner, 2018), (ii) effects of conventionalization as in e.g. red hair (G¨ardenfors, 2004) and (iii) the inherent complexity of real-world objects (Witzel and Gegenfurtner, 2018), e.g. a tree with green leaves and a brown trunk is typically called green (see figure 1). In human cognition, several recalibration strategies support the constant perception of object colours given these challenges. In addition to bottom-up driven strategies like the chromatic adaption to situational sources of light, this also includes mechanisms such as the Memory Colour Effect: The automatic perception of canonical colours that accompanies the recognition of objects with characteristic hues (Olkkonen et al., 2008). Our aim in this work is to transfer knowledge-based recalibration mechanisms to the automatic classification of object colours. Mojsilovic (2005) and Van de Weijer et al. (2007) propose pixelwise approaches for modeling colour naming in natural images, accounting for factors such as illumination and non-uniform object colours. Van de Weijer et al. (2007) assign colour terms as labels to colour values of individual pixels and then average over these labels to obtain a colour term for an image region. We use their model as one of our baselines in Section 4. However, they do not take into account object-specific colour tendencies. Zarrieß and Schlangen (2016) classify colour histograms for objects in real-world images. They train object-specific classifiers that recalibrate a bottom-up classifier, but only obtain a small improvement from recalibration. We implement a general top-down component that can be 2Code and data for this project are available at: https://github.com/clause-jena/colour-term-grounding integrated with bottom-up processing in different ways. 3 Models We focus on the effect of knowledge in language grounding and adopt a slightly idealized setting for modeling: we assume that the object type is available during training and testing. Following e.g. Snoek et al. (2005); Gunes and Piccardi (2008); Baltrusaitis et al. (2019), we distinguish early and late fusion as a way of integrating modeling components with different sources of information. Figure 2 illustrates our models, which we describe below. BOTTOM-UP This component relies solely on sensory input and is implemented as a feed-forward network trained to predict colour terms from 3dimensional RGB histograms (representing the polychromatic distribution of colour values in complex objects). The output layer has a softmax over the 11 basic colour terms (Berlin and Kay, 1969). 
For comparability, we adopt the architecture in Zarrieß and Schlangen (2016) (Input Layer: 512 nodes, Hidden layers with 240 and 24 nodes and ReLU activation, output layer: 11 nodes, DropOut: 0.2). We did not obtain improvements when testing other colour spaces. We also tried visual features extracted with a neural object recognizer (Simonyan and Zisserman, 2014), which gave only a small improvement over colour histograms. Thus, in Section 4, we report results only for RGB histograms, as they are more transparent as representations and do not include any conceptual information on objects. TOP-DOWN This component relies only on conceptual information about the object, which consists of assignments of objects to object types and colour distributions for object types reflected in the data. Thus, this classifier predicts colour terms given only the object type, which is supposed to mimic the memory colour effect discussed in Section 2. We use (pre-trained) word embeddings for object types that are not fine-tuned during training. Hence, TOP-DOWN and the combined models can be tested on unseen object types. We use 100-dimensional pre-trained GloVe embeddings (Pennington et al., 2014). The embedding layer is followed by a hidden layer (24 nodes, ReLU activation, drop-out set to 0.2).
Figure 2: Late and Early Fusion
LATE-FUSION In this approach, BOTTOM-UP and TOP-DOWN compute their classification decisions independently. The output probability distributions are interpolated using a constant factor (which is set to 1 in our case), i.e., we simply calculate their arithmetic mean. Hence, the integration of visual and conceptual information is hard-coded. EARLY-FUSION Object type embeddings are processed by a single hidden layer (24 nodes, ReLU activation, 0.2 drop out), concatenated with the visual input and then further processed by the network (2 hidden layers with 240 and 24 nodes, ReLU activation, 0.2 drop out). The classification decision is computed after this shared processing. The integration of both sources of information is therefore learned by the model. 4 Experiments 4.1 Set-up Data We use VisualGenome (Krishna et al., 2016), which contains annotations and bounding boxes for 3.8M objects in more than 100K images. Roughly 2.8M object attributes are annotated, the most frequent being colour descriptions. We extracted all objects with at least one attribute among the basic colour terms black, blue, brown, green, grey, orange, pink, purple, red, white, yellow. Objects with multiple names were split up into distinct entries, and basic colour terms were removed from object names. To counter VisualGenome's bias towards images of people (Krishna et al., 2016), we exclude objects with names that are hyponyms of person.3 We compile our train and test data so that colours are evenly distributed, as we do not want the model to rely on biases in colour frequency.4 For the development and evaluation sets, we use random under-sampling. To ensure training examples for less frequent object categories, 10K instances for each colour category are randomly picked from the original train set, with the possibility of objects being picked multiple times. In summary, 110k objects are used for training, 17523 for development and 9328 for evaluation. In Section 4.2, we report results for objects that occur at least 100 times in the data.
3 Excluding e.g. "white person" as a case of a highly conventionalized colour.
4 white has 290K, purple 10K instances in the data.
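As a concrete reference for the components used in these experiments, the following is a minimal sketch of the Section 3 models. Layer sizes follow the description above; the exact placement of dropout, the output layer of TOP-DOWN, and all variable names (including the pre-loaded GloVe weight matrix) are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_COLOURS = 11  # basic colour terms (Berlin and Kay, 1969)

class BottomUp(nn.Module):
    """RGB-histogram classifier: 512 -> 240 -> 24 -> 11 with ReLU and dropout 0.2."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(512, 240), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(240, 24), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(24, N_COLOURS))
    def forward(self, hist):
        return self.net(hist)  # logits over colour terms

class TopDown(nn.Module):
    """Object-type classifier over frozen 100-d GloVe embeddings."""
    def __init__(self, glove_weights):  # glove_weights: (vocab, 100) FloatTensor
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(glove_weights, freeze=True)
        self.net = nn.Sequential(
            nn.Linear(100, 24), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(24, N_COLOURS))
    def forward(self, obj_type_ids):
        return self.net(self.emb(obj_type_ids))

def late_fusion(bottom_up_logits, top_down_logits):
    # Arithmetic mean of the two output distributions (interpolation factor 1).
    return 0.5 * (F.softmax(bottom_up_logits, dim=-1) +
                  F.softmax(top_down_logits, dim=-1))

class EarlyFusion(nn.Module):
    """Object embedding through one 24-unit hidden layer, concatenated with the
    512-d histogram, then shared 240- and 24-unit layers before the output."""
    def __init__(self, glove_weights):
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(glove_weights, freeze=True)
        self.obj_proj = nn.Sequential(nn.Linear(100, 24), nn.ReLU(), nn.Dropout(0.2))
        self.shared = nn.Sequential(
            nn.Linear(512 + 24, 240), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(240, 24), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(24, N_COLOURS))
    def forward(self, hist, obj_type_ids):
        obj = self.obj_proj(self.emb(obj_type_ids))
        return self.shared(torch.cat([hist, obj], dim=-1))
```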
For testing on unseen objects in Section 4.3, we use object types that occur at least 50 but less than 100 times with a colour attribute in VisualGenome (these are excluded from training). Training We train for 25 epochs using RMSprop as optimizer and a learning rate of 0.001. Evaluation We evaluate our models by measuring their accuracy both for the entire test set and for separate subsets of objects. In line with previous research in perceptual psychology (cf. Section 2), we distinguish Colour Diagnostic Objects (CDOs), that are strongly associated with a specific Memory Colour, and Colour Neutral Objects (CNOs), objects without a typical colour appearance. We expect the distinction between CDOs and CNOs to be reflected primarily in model predictions that involve the processing of conceptual object information. For CDOs, determining the respective Memory Colour could result in improved classification results, whereas this strategy is less promising for CNOs.

Table 1: Accuracy in colour prediction for all seen object types (left); broken down for CDOs, CNOs, CBOs (middle), and for typical and atypical colours of CDOs (right).
              All    CDO    CBO    CNO    CDO typ.  CDO atyp.
Pixelwise     38.5   50.4   32.5   41.0   58.6      26.6
BOTTOM-UP     45.0   54.0   36.5   50.4   62.7      28.9
TOP-DOWN      33.7   72.6   26.6   19.7   96.6      2.6
LATE-FUSION   52.1   71.7   43.4   51.1   94.0      6.9
EARLY-FUSION  51.4   74.0   43.7   48.5   94.0      15.7

Manually identifying objects as CDOs or CNOs is hardly feasible when using large-scale data sets such as VisualGenome. We therefore decide on a quantitative basis whether object types exhibit characteristic colours, namely by means of the entropy of the colour term distribution of an object type. For each object type o, we determine p_c as the relative frequency of a colour c for all instances of the object. The entropy of an object's colour distribution is then calculated as E_o = − Σ_{c ∈ C} p_c log_2 p_c
We observe interesting differences between the fusion strategies: LATE-FUSION atypical typical Gold atypical 33 272 typical 53 834 EARLY-FUSION atypical typical Gold atypical 69 236 typical 53 834 Table 2: LATE-FUSION and EARLY-FUSION predictions for typical and atypical CDOs LATE-FUSION This model generally performs better than BOTTOM-UP and TOP-DOWN separately. Moreover, the impact of the respective component on the combined result depends on the type of object: For CDOs, the model seems to generally predict the memory colour for these diagnostic objects computed by TOP-DOWN. For CBOs, there is still a clear improvement over BOTTOMUP, whereas for CNOs the model mostly relies on BOTTOM-UP. Thus, even though the fusion is hard-coded, it achieves the desired flexible pattern for combining the components. However, LATEFUSION does not perform well at predicting atypical colours of CDOs, see the right columns of Table 1. This suggests that, here, the prediction of object colours is only based on knowledge and, essentially, not visually grounded. This is unsatisfactory as these atypical colours for CDOs could be particularly salient in conversation (Tarenskeen et al., 2015). EARLY-FUSION This fusion strategy generally improves the accuracy of BOTTOM-UP and TOPDOWN in isolation. On average, it slightly underperforms LATE-FUSION, but obtains slightly better accuracy values for CDOs and CBOs than LATE-FUSION (Table 1). Table 2 illustrates that EARLY-FUSION recognizes atypical object colours slightly more often than LATE-FUSION. At the same time, the model achieves higher accuracy for atypical CDOs, indicating that it often predicts the correct object colour in these cases. For typical CDO colours, LATE-FUSION and EARLY-FUSION achieve the same accuracy. Thus, EARLY-FUSION improves the prediction of atypical colours for CDOs as compared LATEFUSION (exemplified in figure 3). But it still predicts canonical object colours too often and achieves a lower accuracy on atypical CDO colours than BOTTOM-UP. This indicates that early link6540 BOTTOM-UP EARLY-FUSION object % top colour Acc. % top prediction Acc. % top prediction heater 94.12 (white) 0.0 35.29 (gray) 82.4 76.47 (white) tablet 42.86 (black) 19.0 42.86 (blue) 61.9 57.14 (black) wipers 94.12 (black) 35.3 29.41 (black) 70.6 76.47 (black) room 54.55 (white) 18.2 31.82 (gray) 50.0 36.36 (white) cherry 100.0 (red) 68.8 68.75 (red) 0.0 100.0 (green) lime 100.0 (green) 68.4 68.42 (green) 5.3 94.74 (yellow) dumpster 37.93 (green) 72.4 27.59 (blue) 10.3 75.86 (pink) plank 57.14 (brown) 66.7 33.33 (brown) 4.8 90.48 (gray) Table 3: Accuracy and top predicted colours for selected object types unseen during training. The top four objects obtain the highest improvements through early fusion, the bottom four objects decrease most with early fusion. Figure 3: TOP-DOWN and LATE-FUSION predict the canonical colour for the depicted bush (“green”). BOTTOM-UP and EARLY-FUSION capture the annotated colour (“purple”). age and joint processing improves the integration of visual and conceptual information, at least for CDOs. It is, however, not a perfect solution to all problems identified: Even though the model learns to merge both sources of information, the bias for canonical colours is still too strong, and there remains a high dependence on non-sensory data. 4.3 Unseen Object Types By using pre-trained embeddings, our models are able to handle object types that are unseen in the training set, via similarity to seen object types in the embedding space. 
For these objects, BOTTOMUP and EARLY-FUSION achieve an overall accuracy of 37.8 and 31.9, respectively5. To provide more qualitative insights, Table 3 shows the top four and bottom four objects in terms of how much EARLY-FUSION improves over the BOTTOM-UP baseline. With heater and wipers, the top four objects include types with highly characteristic 5Note that these figures are not directly comparable with the results described above, since the instances for the individual colours are not evenly distributed in this set. colours. EARLY-FUSION appears to correctly derive their object-specific colour tendencies from similarities to trained objects. In the lower four objects, all instances of cherry and lime share the same colour. Here, EARLY-FUSION also predominantly predicts a particular but incorrect colour, i.e. similarity in the off-the-shelf embedding space does not lead to good generalization for colour tendencies. This is particularly evident with lime: The prevailing prediction of yellow suggests that the (in this case, misleading) semantic similarity to the trained object type lemon is captured. These findings support previous work on multimodal distributional semantics showing that offthe-shelf embeddings do not necessarily capture similarity with respect to visual attributes of objects (Silberer and Lapata, 2014). 5 Discussion and Conclusion As in human perception, knowledge about typical object properties seems to be a valuable source of information for visual language grounding. Our fusion models clearly outperform a bottom-up baseline that relies solely on visual input. We also showed that the fusion architecture matters: the early integration of visual and conceptual information and their shared processing appears to be beneficial when colour diagnostic objects have atypical colours. However, even Early Fusion does not yet achieve a perfect balance between top-down and bottom-up processing. Future work should look at more complex fusion strategies, possibly coupled with bottom-up recalibration mechanisms (Zarrieß and Schlangen, 2016; Mojsilovic, 2005) to further enhance colour classification under difficult illumination conditions. Our experiment on objects unseen during training looks promising but can be extended towards a more general approach that interfaces colour prediction with object recognition. 6541 References Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077–6086. Tadas Baltrusaitis, Chaitanya Ahuja, and LouisPhilippe Morency. 2019. Multimodal machine learning: A survey and taxonomy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2):423–443. Bert Baumgaertner, Raquel Fernandez, and Matthew Stone. 2012. Towards a flexible semantics: Colour terms in collaborative reference tasks. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics, pages 80–84, Montr´eal, Canada. Association for Computational Linguistics. Brent Berlin and Paul Kay. 1969. Basic Color Terms: Their Universality and Evolution. University of California Press, Berkeley and Los Angeles. Raffaella Bernardi, Ruket Cakici, Desmond Elliott, Aykut Erdem, Erkut Erdem, Nazli Ikizler-Cinbis, Frank Keller, Adrian Muscat, and Barbara Plank. 2016. 
Automatic description generation from images: A survey of models, datasets, and evaluation measures. Journal of Artificial Intelligence Research, 55:409–442. Andrew M. Colman. 2009. A dictionary of psychology, 3 edition. Oxford paperback reference. Oxford University Press, Oxford. Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Marc Aurelio Ranzato, and Tomas Mikolov. 2013. Devise: A deep visual-semantic embedding model. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2121–2129. Curran Associates, Inc. Peter G¨ardenfors. 2004. Conceptual spaces: The Geometry of Thought. A Bradford book. MIT Press, Cambridge, Mass. [u.a]. Hatice Gunes and Massimo Piccardi. 2008. Automatic temporal segment detection and affect recognition from face and body display. IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics : a publication of the IEEE Systems, Man, and Cybernetics Society, 39:64–84. Tomohiro Ishizu. 2013. Disambiguation of ambiguous figures in the brain. Frontiers in Human Neuroscience, 7:501. Kushal Kafle and Christopher Kanan. 2017. Visual question answering: Datasets, algorithms, and future challenges. Computer Vision and Image Understanding, 163:3–20. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. 2016. Visual genome: Connecting language and vision using crowdsourced dense image annotations. Rony Kubat, Daniel Mirman, and Deb Roy. 2009. Semantic context effects on color categorization. In Proceedings of the 31st Annual Conference of the Cognitive Science Society, pages 491–495. Cognitive Science Society. Brian McMahan and Matthew Stone. 2015. A bayesian model of grounded color semantics. Transactions of the Association for Computational Linguistics, 3:103–115. Timothy Meo, Brian McMahan, and Matthew Stone. 2014. Generating and resolving vague color references. In Proceedings of the 18th Workshop Semantics and Pragmatics of Dialogue (SemDial), pages 107–115. Holger Mitterer, J¨orn M. Horschig, Jochen M¨usseler, and Asifa Majid. 2009. The influence of memory on perception: it’s not what things look like, it’s what you call them. Journal of experimental psychology. Learning, memory, and cognition, 35 6:1557–62. Holger Mitterer and Jan Peter de Ruiter. 2008. Recalibrating color categories using world knowledge. Psychological Science, 19(7):629–634. Aleksandra Mojsilovic. 2005. A computational model for color naming and describing color composition of images. IEEE Transactions on Image Processing, 14(5):690–699. Will Monroe, Noah D. Goodman, and Christopher Potts. 2016. Learning to generate compositional color descriptions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2243–2248, Austin, Texas. Association for Computational Linguistics. Will Monroe, Robert X.D. Hawkins, Noah D. Goodman, and Christopher Potts. 2017. Colors in context: A pragmatic neural model for grounded language understanding. Transactions of the Association for Computational Linguistics, 5:325–338. Maria Olkkonen, Thorsten Hansen, and Karl R. Gegenfurtner. 2008. Color appearance of familiar objects: Effects of object shape, texture, and illumination changes. Journal of Vision, 8(5):13–13. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. 
In Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Carina Silberer and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 721–732. 6542 Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cees Snoek, Marcel Worring, and Arnold Smeulders. 2005. Early versus late fusion in semantic video analysis. pages 399–402. S.L. Tarenskeen, M. Broersma, and B. Geurts. 2015. ’hand me the yellow stapler’ or ’hand me the yellow dress’: Colour overspecification depends on object category. In Proceedings of the 19th Workshop on the Semantics and Pragmatics of Dialogue (SemDial), pages 140–148. Joost Van de Weijer, Cordelia Schmid, and Jakob Verbeek. 2007. Learning color names from real-world images. In CVPR 2007 - IEEE Conference on Computer Vision, pages 1–8. Olivia Winn and Smaranda Muresan. 2018. ‘lighter’ can still be dark: Modeling comparative color descriptions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 790–795, Melbourne, Australia. Association for Computational Linguistics. Christoph Witzel and Karl R. Gegenfurtner. 2018. Color perception: Objects, constancy, and categories. Annual Review of Vision Science, 4(1):475– 499. Yongqin Xian, Christoph H Lampert, Bernt Schiele, and Zeynep Akata. 2018. Zero-shot learning-a comprehensive evaluation of the good, the bad and the ugly. IEEE transactions on pattern analysis and machine intelligence. Sina Zarrieß and David Schlangen. 2016. Towards generating colour terms for referents in photographs: Prefer the expected or the unexpected? In Proceedings of the 9th International Natural Language Generation conference, pages 246–255, Edinburgh, UK. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6543–6554 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6543 Span-based Localizing Network for Natural Language Video Localization Hao Zhang1,2, Aixin Sun1, Wei Jing2,3, Joey Tianyi Zhou2,∗ 1School of Computer Science and Engineering, Nanyang Technological University, Singapore 2Institute of High Performance Computing, A*STAR, Singapore 3Institute for Infocomm Research, A*STAR, Singapore [email protected], [email protected] [email protected], joey [email protected] Abstract Given an untrimmed video and a text query, natural language video localization (NLVL) is to locate a matching span from the video that semantically corresponds to the query. Existing solutions formulate NLVL either as a ranking task and apply multimodal matching architecture, or as a regression task to directly regress the target video span. In this work, we address NLVL task with a span-based QA approach by treating the input video as text passage. We propose a video span localizing network (VSLNet), on top of the standard span-based QA framework, to address NLVL. The proposed VSLNet tackles the differences between NLVL and span-based QA through a simple and yet effective query-guided highlighting (QGH) strategy. The QGH guides VSLNet to search for matching video span within a highlighted region. Through extensive experiments on three benchmark datasets, we show that the proposed VSLNet outperforms the state-of-the-art methods; and adopting span-based QA framework is a promising direction to solve NLVL.1 1 Introduction Given an untrimmed video, natural language video localization (NLVL) is to retrieve or localize a temporal moment that semantically corresponds to a given language query. An example is shown in Figure 1. As an important vision-language understanding task, NLVL involves both computer vision and natural language processing techniques (Krishna et al., 2017; Hendricks et al., 2017; Gao et al., 2018; Le et al., 2019; Yu et al., 2019). Clearly, cross-modal reasoning is essential for NLVL to correctly locate the target moment from a video. Prior works primarily treat NLVL as a ranking task, which is solved by applying multimodal ∗Corresponding author. 1https://github.com/IsaacChanghau/VSLNet Language Query: Men are celebrating and an old man gives a trophy to a young boy. Timeline (second) 127.52 139.20 0.00 194.69 The Ground Truth Moment Figure 1: An illustration of localizing a temporal moment in an untrimmed video by a given language query. matching architecture to find the best matching video segment for a given language query (Gao et al., 2017; Hendricks et al., 2018; Liu et al., 2018a; Ge et al., 2019; Xu et al., 2019; Chen and Jiang, 2019; Zhang et al., 2019). Recently, some works explore to model cross-interactions between video and query, and to regress the temporal locations of target moment directly (Yuan et al., 2019b; Lu et al., 2019a). There are also studies to formulate NLVL as a sequence decision making problem and to solve it by reinforcement learning (Wang et al., 2019; He et al., 2019). We address the NLVL task from a different perspective. The essence of NLVL is to search for a video moment as the answer to a given language query from an untrimmed video. By treating the video as a text passage, and the target moment as the answer span, NLVL shares significant similarities with span-based question answering (QA) task. 
The span-based QA framework (Seo et al., 2017; Wang et al., 2017; Huang et al., 2018) can be adopted for NLVL. Hence, we attempt to solve this task with a multimodal span-based QA approach. There are two main differences between traditional text span-based QA and NLVL tasks. First, video is continuous and causal relations between video events are usually adjacent. Natural language, on the other hand, is inconsecutive and words in a sentence demonstrate syntactic structure. For instance, changes between adjacent video frames are usually very small, while adjacent word to6544 kens may carry distinctive meanings. As the result, many events in a video are directly correlated and can even cause one another (Krishna et al., 2017). Causalities between word spans or sentences are usually indirect and can be far apart. Second, compared to word spans in text, human is insensitive to small shifting between video frames. In other words, small offsets between video frames do not affect the understanding of video content, but the differences of a few words or even one word could change the meaning of a sentence. As a baseline, we first solve the NLVL task with a standard span-based QA framework named VSLBase. Specifically, visual features are analogous to that of text passage; the target moment is regarded as the answer span. VSLBase is trained to predict the start and end boundaries of the answer span. Note that VSLBase does not address the two aforementioned major differences between video and natural language. To this end, we propose an improved version named VSLNet (Video Span Localizing Network). VSLNet introduces a Query-Guided Highlighting (QGH) strategy in addition to VSLBase. Here, we regard the target moment and its adjacent contexts as foreground, while the rest as background, i.e., foreground covers a slightly longer span than the answer span. With QGH, VSLNet is guided to search for the target moment within a highlighted region. Through region highlighting, VSLNet well addresses the two differences. First, the longer region provides additional contexts for locating answer span due to the continuous nature of video content. Second, the highlighted region helps the network to focus on subtle differences between video frames, because the search space is reduced compared to the full video. Experimental results on three benchmark datasets show that adopting span-based QA framework is suitable for NLVL. With a simple network architecture, VSLBase delivers comparable performance to strong baselines. In addition, VSLNet further boosts the performance and achieves the best among all evaluated methods. 2 Related Work Natural Language Video Localization. The task of retrieving video segments using language queries was introduced in (Hendricks et al., 2017; Gao et al., 2017). Solutions to NLVL need to model the cross-interactions between natural language and video. The early works treat NLVL as a ranking task, and rely on multimodal matching architecture to find the best matching video moment for a language query (Gao et al., 2017; Hendricks et al., 2017, 2018; Wu and Han, 2018; Liu et al., 2018a,b; Xu et al., 2019; Zhang et al., 2019). Although intuitive, these models are sensitive to negative samples. Specifically, they need to dense sample candidate moments to achieve good performance, which leads to low efficiency and lack of flexibility. Various approaches have been proposed to overcome those drawbacks. Yuan et al. 
(2019b) builds a proposal-free method using BiLSTM and directly regresses temporal locations of target moment. Lu et al. (2019a) proposes a dense bottom-up framework, which regresses the distances to start and end boundaries for each frame in target moment, and select the ones with highest confidence as final result. Yuan et al. (2019a) proposes a semantic conditioned dynamic modulation for better correlating sentence related video contents over time, and establishing a precise matching relationship between sentence and video. There are also works (Wang et al., 2019; He et al., 2019) that formulate NLVL as a sequence decision making problem, and adopt reinforcement learning based approaches, to progressively observe candidate moments conditioned on language query. Most similar to our work are (Chen et al., 2019) and (Ghosh et al., 2019), as both studies are considered using the concept of question answering to address NLVL. However, both studies do not explain the similarity and differences between NLVL and traditional span-based QA, and they do not adopt the standard span-based QA framework. In our study, VSLBase adopts standard span-based QA framework; and VSLNet explicitly addresses the differences between NLVL and traditional spanbased QA tasks. Span-based Question Answering. Span-based QA has been widely studied in past years. Wang and Jiang (2017) combines match-LSTM (Wang and Jiang, 2016) and Pointer-Net (Vinyals et al., 2015) to estimate boundaries of the answer span. BiDAF (Seo et al., 2017) introduces bi-directional attention to obtain query-aware context representation. Xiong et al. (2017) proposes a coattention network to capture the interactions between context and query. R-Net (Wang et al., 2017) integrates mutual and self attentions into RNN encoder for feature refinement. QANet (Yu et al., 2018) lever6545 Query: This person starts cooking at the stove … 3D ConvNet This person stove … GloVe Feature Encoder … … Context-Query Attention Conditioned Span Predictor 𝑃! 𝑃" (a) VSLBase Feature Encoder Feature Encoder Context-Query Attention Conditioned Span Predictor 𝑃! 𝑃" QGH Feature Extractor (fixed during training) (b) VSLNet … … Feature Encoder Shared Shared 𝑉 𝑄 𝑽 𝑸 𝑽′ 𝑸′ FFN FFN &𝑽 &𝑸 𝑽! FFN FFN 𝑽 𝑸 𝑽′ 𝑸′ &𝑽 &𝑸 &𝑸 𝑽! &𝑽! Figure 2: An overview of the proposed architecture for NLVL. The feature extractor is fixed during training. Figure (a) depicts the adoption of standard span-based QA framework, i.e., VSLBase. Figure (b) shows the structure of VSLNet. ages a similar attention mechanism in a stacked convolutional encoder to improve performance. FusionNet (Huang et al., 2018) presents a full-aware multi-level attention to capture complete query information. By treating input video as text passage, the above frameworks are all applicable to NLVL in principle. However, these frameworks are not designed to consider the differences between video and text passage. Their modeling complexity arises from the interactions between query and text passage, both are text. In our solution, VSLBase adopts a simple and standard span-based QA framework, making it easier to model the differences between video and text through adding additional modules. Our VSLNet addresses the differences by introducing the QGH module. Very recently, pre-trained transformer based language models (Devlin et al., 2019; Dai et al., 2019; Liu et al., 2019; Yang et al., 2019) have elevated the performance of span-based QA tasks by a large margin. 
Meanwhile, similar pre-trained models (Sun et al., 2019a,b; Yu and Jiang, 2019; Rahman et al., 2019; Nguyen and Okatani, 2019; Lu et al., 2019b; Tan and Bansal, 2019) are being proposed to learn joint distributions over multimodal sequences of visual and linguistic inputs. Exploring pre-trained models for NLVL is part of our future work and is out of the scope of this study.

3 Methodology

We now describe how to address the NLVL task by adopting a span-based QA framework. We then present VSLBase (Sections 3.2 to 3.4) and VSLNet in detail. Their architectures are shown in Figure 2.

3.1 Span-based QA for NLVL

We denote the untrimmed video as $V = \{f_t\}_{t=1}^{T}$ and the language query as $Q = \{q_j\}_{j=1}^{m}$, where $T$ and $m$ are the number of frames and words, respectively. $\tau^s$ and $\tau^e$ represent the start and end time of the temporal moment, i.e., the answer span. To address NLVL with a span-based QA framework, the data is transformed into a set of SQuAD-style triples (Context, Question, Answer) (Rajpurkar et al., 2016). For each video $V$, we extract its visual features $\mathbf{V} = \{v_i\}_{i=1}^{n}$ with a pre-trained 3D ConvNet (Carreira and Zisserman, 2017), where $n$ is the number of extracted features. Here, $\mathbf{V}$ can be regarded as the sequence of word embeddings of a text passage with $n$ tokens; similar to a word embedding, each feature $v_i$ is a video feature vector.

Since span-based QA aims to predict the start and end boundaries of an answer span, the start/end time of a video sequence needs to be mapped to the corresponding boundaries in the visual feature sequence $\mathbf{V}$. Suppose the video duration is $T$; the start (end) span index is calculated by $a^{s(e)} = \langle \tau^{s(e)} / T \times n \rangle$, where $\langle\cdot\rangle$ denotes the rounding operator. During inference, the predicted span boundary can easily be converted back to the corresponding time via $\tau^{s(e)} = a^{s(e)} / n \times T$.

After transforming the moment annotations in an NLVL dataset, we obtain a set of $(\mathbf{V}, Q, A)$ triples. The visual features $\mathbf{V} = [v_1, v_2, \dots, v_n]$ act as the passage with $n$ tokens; $Q = [q_1, q_2, \dots, q_m]$ is the query with $m$ tokens; and the answer $A = [v_{a^s}, v_{a^s+1}, \dots, v_{a^e}]$ corresponds to a piece of the passage. The NLVL task then becomes finding the correct start and end boundaries of the answer span, $a^s$ and $a^e$.

3.2 Feature Encoder

We already have visual features $\mathbf{V} = \{v_i\}_{i=1}^{n} \in \mathbb{R}^{n\times d_v}$. Word embeddings of a text query $Q$, $\mathbf{Q} = \{q_j\}_{j=1}^{m} \in \mathbb{R}^{m\times d_q}$, are easily obtainable, e.g., with GloVe. We project them into the same dimension $d$, giving $\mathbf{V}' \in \mathbb{R}^{n\times d}$ and $\mathbf{Q}' \in \mathbb{R}^{m\times d}$, by two linear layers (see Figure 2(a)). Then we build the feature encoder with a simplified version of the embedding encoder layer in QANet (Yu et al., 2018). Instead of applying a stack of multiple encoder blocks, we use only one encoder block. This encoder block consists of four convolution layers, followed by a multi-head attention layer (Vaswani et al., 2017). A feed-forward layer is used to produce the output. Layer normalization (Ba et al., 2016) and residual connections (He et al., 2016) are applied to each layer. The encoded visual features and word embeddings are:
$$\tilde{\mathbf{V}} = \mathrm{FeatureEncoder}(\mathbf{V}'), \qquad \tilde{\mathbf{Q}} = \mathrm{FeatureEncoder}(\mathbf{Q}') \tag{1}$$
The parameters of the feature encoder are shared between visual features and word embeddings.
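To make the time-to-index conversion of Section 3.1 concrete, the following is a minimal sketch (not the authors' released code); the clamping of indices to [0, n-1] and the use of Python's built-in rounding are our own assumptions.

```python
# Minimal sketch of the time <-> feature-index conversion in Section 3.1.
# Assumptions (ours, not from the paper's code): Python's built-in round() and
# clamping of indices to the valid range [0, n - 1].

def time_to_index(tau: float, duration: float, n: int) -> int:
    """Map a timestamp tau (seconds) to a visual-feature index: a = <tau / T * n>."""
    idx = round(tau / duration * n)
    return max(0, min(n - 1, idx))

def index_to_time(a: int, duration: float, n: int) -> float:
    """Map a predicted span index back to seconds: tau = a / n * T."""
    return a / n * duration

# Example: the moment annotated as [127.52s, 139.20s] in the 194.69s video of Figure 1,
# with an assumed n = 128 extracted features.
a_s = time_to_index(127.52, 194.69, 128)   # ~= 84
a_e = time_to_index(139.20, 194.69, 128)   # ~= 92
```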
3.3 Context-Query Attention

After feature encoding, we use context-query attention (CQA) (Seo et al., 2017; Xiong et al., 2017; Yu et al., 2018) to capture the cross-modal interactions between visual and textual features. CQA first calculates the similarity scores $\mathcal{S} \in \mathbb{R}^{n\times m}$ between each visual feature and each query feature. Then the context-to-query ($\mathcal{A}$) and query-to-context ($\mathcal{B}$) attention weights are computed as:
$$\mathcal{A} = \mathcal{S}_r \cdot \tilde{\mathbf{Q}} \in \mathbb{R}^{n\times d}, \qquad \mathcal{B} = \mathcal{S}_r \cdot \mathcal{S}_c^{\top} \cdot \tilde{\mathbf{V}} \in \mathbb{R}^{n\times d}$$
where $\mathcal{S}_r$ and $\mathcal{S}_c$ are the row- and column-wise normalizations of $\mathcal{S}$ by softmax, respectively. Finally, the output of context-query attention is written as:
$$\mathbf{V}^{q} = \mathrm{FFN}\big([\tilde{\mathbf{V}};\ \mathcal{A};\ \tilde{\mathbf{V}} \odot \mathcal{A};\ \tilde{\mathbf{V}} \odot \mathcal{B}]\big) \tag{2}$$
where $\mathbf{V}^{q} \in \mathbb{R}^{n\times d}$, FFN is a single feed-forward layer, and $\odot$ denotes element-wise multiplication.

3.4 Conditioned Span Predictor

We construct a conditioned span predictor using two unidirectional LSTMs and two feed-forward layers, inspired by Ghosh et al. (2019). The main difference between ours and Ghosh et al. (2019) is that we use unidirectional LSTMs instead of bidirectional ones. We observe that a unidirectional LSTM shows similar performance with fewer parameters and higher efficiency. The two LSTMs are stacked so that the LSTM of the end boundary can be conditioned on that of the start boundary. The hidden states of the two LSTMs are then fed into the corresponding feed-forward layers to compute the start and end scores:
$$
\begin{aligned}
\mathbf{h}^{s}_{t} &= \mathrm{UniLSTM}_{\mathrm{start}}(\mathbf{v}^{q}_{t}, \mathbf{h}^{s}_{t-1}) \\
\mathbf{h}^{e}_{t} &= \mathrm{UniLSTM}_{\mathrm{end}}(\mathbf{h}^{s}_{t}, \mathbf{h}^{e}_{t-1}) \\
S^{s}_{t} &= \mathbf{W}^{s}\,[\mathbf{h}^{s}_{t}; \mathbf{v}^{q}_{t}] + \mathbf{b}^{s} \\
S^{e}_{t} &= \mathbf{W}^{e}\,[\mathbf{h}^{e}_{t}; \mathbf{v}^{q}_{t}] + \mathbf{b}^{e}
\end{aligned} \tag{3}
$$
Here, $S^{s}_{t}$ and $S^{e}_{t}$ denote the scores of the start and end boundaries at position $t$, and $\mathbf{v}^{q}_{t}$ represents the $t$-th feature in $\mathbf{V}^{q}$. The probability distributions of the start and end boundaries are then computed by $P_{s} = \mathrm{SoftMax}(S^{s}) \in \mathbb{R}^{n}$ and $P_{e} = \mathrm{SoftMax}(S^{e}) \in \mathbb{R}^{n}$, and the training objective is defined as:
$$\mathcal{L}_{\mathrm{span}} = \tfrac{1}{2}\big[\,f_{\mathrm{CE}}(P_{s}, Y_{s}) + f_{\mathrm{CE}}(P_{e}, Y_{e})\,\big] \tag{4}$$
where $f_{\mathrm{CE}}$ is the cross-entropy loss function, and $Y_{s}$ and $Y_{e}$ are the labels for the start ($a^{s}$) and end ($a^{e}$) boundaries, respectively. During inference, the predicted answer span $(\hat{a}^{s}, \hat{a}^{e})$ of a query is generated by maximizing the joint probability of the start and end boundaries:
$$\mathrm{span}(\hat{a}^{s}, \hat{a}^{e}) = \arg\max_{\hat{a}^{s}, \hat{a}^{e}} P_{s}(\hat{a}^{s})\, P_{e}(\hat{a}^{e}) \quad \text{s.t.}\ \ 0 \le \hat{a}^{s} \le \hat{a}^{e} \le n \tag{5}$$
This completes the VSLBase architecture (see Figure 2(a)). VSLNet is built on top of VSLBase with QGH, to be detailed next.
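As an illustration of Section 3.4, below is a simplified PyTorch-style sketch of the conditioned span predictor (Eq. 3) and the joint decoding of Eq. (5). It is a sketch under stated assumptions (single-layer unidirectional LSTMs, batch-first tensors, no masking of padded positions), not the released VSLNet implementation.

```python
# Simplified sketch of the conditioned span predictor (Eq. 3) and joint decoding (Eq. 5).
# Assumptions: batch-first tensors, no padding mask, hidden size equal to the input dim.
import torch
import torch.nn as nn

class ConditionedSpanPredictor(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.start_lstm = nn.LSTM(dim, dim, batch_first=True)  # unidirectional
        self.end_lstm = nn.LSTM(dim, dim, batch_first=True)    # conditioned on start states
        self.start_ffn = nn.Linear(2 * dim, 1)
        self.end_ffn = nn.Linear(2 * dim, 1)

    def forward(self, v_q: torch.Tensor):                       # v_q: (batch, n, dim)
        h_s, _ = self.start_lstm(v_q)                           # start-boundary hidden states
        h_e, _ = self.end_lstm(h_s)                             # end boundary conditioned on start
        score_s = self.start_ffn(torch.cat([h_s, v_q], dim=-1)).squeeze(-1)  # (batch, n)
        score_e = self.end_ffn(torch.cat([h_e, v_q], dim=-1)).squeeze(-1)    # (batch, n)
        return score_s, score_e

def decode_span(p_s: torch.Tensor, p_e: torch.Tensor):
    """Given start/end probability distributions (batch, n), maximize
    P_s(a_s) * P_e(a_e) subject to a_s <= a_e, as in Eq. (5)."""
    joint = p_s.unsqueeze(2) * p_e.unsqueeze(1)                 # (batch, n, n)
    joint = torch.triu(joint)                                   # keep only a_s <= a_e
    n = joint.size(-1)
    flat = joint.flatten(1).argmax(dim=1)                       # (batch,)
    return torch.div(flat, n, rounding_mode="floor"), flat % n  # (start indices, end indices)
```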
3.5 Query-Guided Highlighting

A Query-Guided Highlighting (QGH) strategy is introduced in VSLNet to address the major differences between text span-based QA and NLVL, as shown in Figure 2(b). With the QGH strategy, we consider the target moment as the foreground and the rest as background, as illustrated in Figure 3.

Figure 3: An illustration of foreground and background of visual features. α is the ratio of foreground extension.

The target moment, which is aligned with the language query, starts at $a^{s}$ and ends at $a^{e}$ with length $L = a^{e} - a^{s}$. QGH extends the boundaries of the foreground to cover its antecedent and consequent video contents, where the extension ratio is controlled by a hyperparameter $\alpha$. As mentioned in the Introduction, the extended boundary could potentially cover additional contexts and also help the network focus on subtle differences between video frames. By assigning 1 to foreground and 0 to background, we obtain a 0-1 sequence, denoted by $Y_{h}$.

QGH is a binary classification module that predicts the confidence with which a visual feature belongs to the foreground or background. The structure of QGH is shown in Figure 4.

Figure 4: The structure of Query-Guided Highlighting.

We first encode the word features $\tilde{\mathbf{Q}}$ into a sentence representation (denoted by $\mathbf{h}_{Q}$) with a self-attention mechanism (Bahdanau et al., 2015). Then $\mathbf{h}_{Q}$ is concatenated with each feature in $\mathbf{V}^{q}$ as $\bar{\mathbf{V}}^{q} = [\bar{\mathbf{v}}^{q}_{1}, \dots, \bar{\mathbf{v}}^{q}_{n}]$, where $\bar{\mathbf{v}}^{q}_{i} = [\mathbf{v}^{q}_{i}; \mathbf{h}_{Q}]$. The highlighting score is computed as:
$$\mathcal{S}_{h} = \sigma\big(\mathrm{Conv1D}(\bar{\mathbf{V}}^{q})\big)$$
where $\sigma$ denotes the sigmoid activation and $\mathcal{S}_{h} \in \mathbb{R}^{n}$. The highlighted features are calculated by:
$$\tilde{\mathbf{V}}^{q} = \mathcal{S}_{h} \cdot \bar{\mathbf{V}}^{q} \tag{6}$$
Accordingly, the feature $\mathbf{V}^{q}$ in Equation 3 is replaced by $\tilde{\mathbf{V}}^{q}$ in VSLNet to compute $\mathcal{L}_{\mathrm{span}}$. The loss function of query-guided highlighting is formulated as:
$$\mathcal{L}_{\mathrm{QGH}} = f_{\mathrm{CE}}(\mathcal{S}_{h}, Y_{h}) \tag{7}$$
VSLNet is trained in an end-to-end manner by minimizing the following loss:
$$\mathcal{L} = \mathcal{L}_{\mathrm{span}} + \mathcal{L}_{\mathrm{QGH}} \tag{8}$$

4 Experiments

4.1 Datasets

We conduct experiments on three benchmark datasets: Charades-STA (Gao et al., 2017), ActivityNet Caption (Krishna et al., 2017), and TACoS (Regneri et al., 2013), summarized in Table 1. Charades-STA is prepared by Gao et al. (2017) based on the Charades dataset (Sigurdsson et al., 2016). The videos are about daily indoor activities. There are 12,408 and 3,720 moment annotations for training and test, respectively. ActivityNet Caption contains about 20k videos taken from ActivityNet (Heilbron et al., 2015). We follow the setup in Yuan et al. (2019b), leading to 37,421 moment annotations for training and 17,505 annotations for test. TACoS is selected from the MPII Cooking Composite Activities dataset (Rohrbach et al., 2012). We follow the setting in Gao et al. (2017), where 10,146, 4,589 and 4,083 annotations are used for training, validation and test, respectively.

4.2 Experimental Settings

Metrics. We adopt "R@n, IoU = µ" and "mIoU" as the evaluation metrics, following (Gao et al., 2017; Liu et al., 2018a; Yuan et al., 2019b). "R@n, IoU = µ" denotes the percentage of language queries having at least one result whose Intersection over Union (IoU) with the ground truth is larger than µ among the top-n retrieved moments. "mIoU" is the average IoU over all test samples. In our experiments, we use n = 1 and µ ∈ {0.3, 0.5, 0.7}.

Implementation. For the language query Q, we use 300d GloVe (Pennington et al., 2014) vectors to initialize each lowercased word; the word embeddings are fixed during training. For the untrimmed video V, we downsample frames and extract RGB visual features using the 3D ConvNet pre-trained on the Kinetics dataset (Carreira and Zisserman, 2017). We set the dimension of all hidden layers in the model to 128; the kernel size of the convolution layer is 7; the head size of the multi-head attention is 8. For all datasets, the model is trained for 100 epochs with a batch size of 16 and an early stopping strategy. Parameter optimization is performed by Adam (Kingma and Ba, 2015) with a learning rate of 0.0001, linear decay of the learning rate, and gradient clipping at 1.0. Dropout (Srivastava et al., 2014) of 0.2 is applied to prevent overfitting.
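For concreteness, here is a small sketch of the "R@1, IoU = µ" and "mIoU" metrics described above, under our own assumption that each system returns a single (start, end) prediction in seconds per query.

```python
# Sketch of "R@1, IoU = mu" and "mIoU" (Section 4.2), assuming one predicted
# (start_sec, end_sec) moment per query.
def temporal_iou(pred, gold):
    inter = max(0.0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
    return inter / union if union > 0 else 0.0

def evaluate(predictions, golds, thresholds=(0.3, 0.5, 0.7)):
    ious = [temporal_iou(p, g) for p, g in zip(predictions, golds)]
    recall_at_1 = {mu: 100.0 * sum(iou > mu for iou in ious) / len(ious) for mu in thresholds}
    mean_iou = 100.0 * sum(ious) / len(ious)
    return recall_at_1, mean_iou
```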
4.3 Comparison with State-of-the-Arts We compare VSLBase and VSLNet with the following state-of-the-arts: CTRL (Gao et al., 2017), ACRN (Liu et al., 2018a), TGN (Chen et al., 2018), ACL-K (Ge et al., 2019), QSPN (Xu et al., 2019), SAP (Chen and Jiang, 2019), MAN (Zhang et al., 2019), SM-RL (Wang et al., 2019), RWMRL (He et al., 2019), L-Net (Chen et al., 2019), ExCL (Ghosh et al., 2019), ABLR (Yuan et al., 6548 Dataset Domain # Videos (train/val/test) # Annotations Nvocab ¯Lvideo ¯Lquery ¯Lmoment ∆moment Charades-STA Indoors 5, 338/ −/1, 334 12, 408/ −/3, 720 1, 303 30.59s 7.22 8.22s 3.59s ActivityNet Cap Open 10, 009/ −/4, 917 37, 421/ −/17, 505 12, 460 117.61s 14.78 36.18s 40.18s TACoS Cooking 75/27/25 10, 146/4, 589/4, 083 2, 033 287.14s 10.05 5.45s 7.56s Table 1: Statistics of NLVL datasets, where Nvocab is vocabulary size of lowercase words, ¯Lvideo denotes average length of videos in seconds, ¯Lquery denotes average number of words in sentence query, ¯Lmoment is average length of temporal moments in seconds, and ∆moment is the standard deviation of temporal moment length in seconds. Model IoU = 0.3 IoU = 0.5 IoU = 0.7 mIoU C3D model without fine-tuning as visual feature extractor CTRL 23.63 8.89 ACL-K 30.48 12.20 QSPN 54.70 35.60 15.80 SAP 27.42 13.36 SM-RL 24.36 11.17 RWM-RL 36.70 MAN 46.53 22.72 DEBUG 54.95 37.39 17.69 36.34 VSLBase 61.72 40.97 24.14 42.11 VSLNet 64.30 47.31 30.19 45.15 C3D model with fine-tuning on Charades dataset ExCL 65.10 44.10 23.30 VSLBase 68.06 50.23 30.16 47.15 VSLNet 70.46 54.19 35.22 50.02 Table 2: Results (%) of “R@n, IoU = µ” and “mIoU” compared with the state-of-the-art on Charades-STA. Model IoU = 0.3 IoU = 0.5 IoU = 0.7 mIoU TGN 45.51 28.47 ABLR 55.67 36.79 36.99 RWM-RL 36.90 QSPN 45.30 27.70 13.60 ExCL∗ 63.00 43.60 24.10 DEBUG 55.91 39.72 39.51 VSLBase 58.18 39.52 23.21 40.56 VSLNet 63.16 43.22 26.16 43.19 Table 3: Results (%) of “R@n, IoU = µ” and “mIoU” compared with the state-of-the-art on ActivityNet Caption. 2019b) and DEBUG (Lu et al., 2019a). In all result tables, the scores of compared methods are reported in the corresponding works. Best results are in bold and second best underlined. The results on Charades-STA are summarized in Table 2. For fair comparison with ExCL, we follow the same setting in ExCL to use the C3D model fine-tuned on Charades dataset as visual feature extractor. Observed that VSLNet significantly outperforms all baselines by a large margin over all metrics. It is worth noting that the performance improvements of VSLNet are more significant under more strict metrics. For instance, VSLNet achieves 7.47% improvement in IoU = 0.7 versus Model IoU = 0.3 IoU = 0.5 IoU = 0.7 mIoU CTRL 18.32 13.30 TGN 21.77 18.90 ACRN 19.52 14.62 ABLR 19.50 9.40 13.40 ACL-K 24.17 20.01 L-Net 13.41 SAP 18.24 SM-RL 20.25 15.95 DEBUG 23.45 11.72 16.03 VSLBase 23.59 20.40 16.65 20.10 VSLNet 29.61 24.27 20.03 24.11 Table 4: Results (%) of “R@n, IoU = µ” and “mIoU” compared with the state-of-the-art on TACoS. Module IoU = 0.3 IoU = 0.5 IoU = 0.7 mIoU BiLSTM + CAT 61.18 43.04 26.42 42.83 CMF + CAT 63.49 44.87 27.07 44.01 BiLSTM + CQA 65.08 46.94 28.55 45.18 CMF + CQA 68.06 50.23 30.16 47.15 Table 5: Comparison between models with alternative modules in VSLBase on Charades-STA. 0.78% in IoU = 0.5, compared to MAN. Without query-guided highlighting, VSLBase outperforms all compared baselines over IoU = 0.7, which shows adopting span-based QA framework is promising for NLVL. 
Moreover, VSLNet benefits from visual feature fine-tuning, and achieves state-of-the-art results on this dataset. Table 3 summarizes the results on ActivityNet Caption dataset. Note that this dataset requires YouTube clips to be downloaded online. We have 1, 309 missing videos, while ExCL reports 3, 370 missing videos. Strictly speaking, the results reported in this table are not directly comparable. Despite that, VSLNet is superior to ExCL with 2.06% and 0.16% absolute improvements over IoU = 0.7 and IoU = 0.3, respectively. Meanwhile, VSLNet surpasses other baselines. Similar observations hold on TACoS dataset. Reported in Table 4, VSLNet achieves new state-ofthe-art performance over all evaluation metrics. Without QGH, VSLBase shows comparable per6549 Module CAT CQA ∆ BiLSTM 26.42 28.55 +2.13 CMF 27.07 30.16 +3.09 ∆ +0.65 +1.61 Table 6: Performance gains (%) of different modules over “R@1, IoU = 0.7” on Charades-STA. 𝑎"($𝑎") 𝑎& $𝑎& 𝑎&($𝑎&) 𝑎" $𝑎" person a is in a entryway eating a sandwich. a person awakens in their bedroom laying ona pillow. Figure 5: Similarity scores, S, between visual and language features in the context-query attention. as/ae denote the start/end boundaries of ground truth video moment, ˆas/ˆae denote the start/end boundaries of predicted target moment. formance with baselines. 4.4 Ablation Studies We conduct ablative experiments to analyze the importance of feature encoder and context-query attention in our approach. We also investigate the impact of extension ratio α (see Figure 3) in query-guided highlighting (QGH). Finally we visually show the effectiveness of QGH in VSLNet, and also discuss the weaknesses of VSLBase and VSLNet. 4.4.1 Module Analysis We study the effectiveness of our feature encoder and context-query attention (CQA) by replacing them with other modules. Specifically, we use bidirectional LSTM (BiLSTM) as an alternative feature encoder. For context-query attention, we replace it by a simple method (named CAT) which concatenates each visual feature with max-pooled query feature. Recall that our feature encoder consists of Convolution + Multi-head attention + Feed-forward layers (see Section 3.2), we name it CMF. With the alternatives, we now have 4 combinations, listed in Table 5. Observe from the results, CMF shows stable superiority over CAT on all metrics regardless of other modules; CQA surpasses CAT whichever feature encoder is used. This study indicates that CMF and CQA are more effective. Table 6 reports performance gains of different 0.0 0.05 0.1 0.2 0.3 0.4 0.5 1.0 1.5 2.0 3.0 Extension Ratio ( ) 67.0 67.5 68.0 68.5 69.0 69.5 70.0 R@1, IoU=0.3 VSLNet VSLBase (a) R@1, IoU = 0.3 0.0 0.05 0.1 0.2 0.3 0.4 0.5 1.0 1.5 2.0 3.0 Extension Ratio ( ) 50.0 50.5 51.0 51.5 52.0 52.5 53.0 53.5 R@1, IoU=0.5 VSLNet VSLBase (b) R@1, IoU = 0.5 0.0 0.05 0.1 0.2 0.3 0.4 0.5 1.0 1.5 2.0 3.0 Extension Ratio ( ) 30.0 30.5 31.0 31.5 32.0 32.5 33.0 33.5 34.0 34.5 R@1, IoU=0.7 VSLNet VSLBase (c) R@1, IoU = 0.7 0.0 0.05 0.1 0.2 0.3 0.4 0.5 1.0 1.5 2.0 3.0 Extension Ratio ( ) 47.0 47.5 48.0 48.5 49.0 49.5 mIoU VSLNet VSLBase (d) mIoU Figure 6: Analysis of the impact of extension ratio α in Query-Guided Highlighting on Charades-STA. 
0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Intersection over Union (IoU) 100 200 300 400 500 600 700 # of Samples VSLBase VSLNet (a) Charades-STA 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Intersection over Union (IoU) 500 1000 1500 2000 2500 3000 3500 # of Samples VSLBase VSLNet (b) ActivityNet Caption Figure 7: Histograms of the number of predicted results on test set under different IoUs, on two datasets. modules over “R@1, IoU = 0.7” metric. The results shows that replacing CAT with CQA leads to larger improvements, compared to replacing BiLSTM by CMF. This observation suggests CQA plays a more important role in our model. Specifically, keeping CQA, the absolute gain is 1.61% by replacing encoder module. Keeping CMF, the gain of replacing attention module is 3.09%. Figure 5 visualizes the matrix of similarity score between visual and language features in the contextquery attention (CQA) module (S ∈Rn×m in Section 3.3). This figure shows visual features are more relevant to the verbs and their objects in the query sentence. For example, the similarity scores between visual features and “eating” (or “sandwich”) are higher than that of other words. We believe that verbs and their objects are more likely to be used to describe video activities. Our observation is consistent with Ge et al. (2019), where verb-object pairs are extracted as semantic activity concepts. In contrast, these concepts are automatically captured by the CQA module in our method. 6550 Language Query: The person starts fixing her hair. Language Query: The person takes a sandwich from the refrigerator. 21.30s 11.40s Ground Truth 8.44s VSLBase VSLNet 11.42s 20.86s 26.97s 22.50s 17.20s Ground Truth 18.33s VSLBase VSLNet 17.44s 23.06s 25.23s (a) Two example cases on the Charades-STA dataset 97.73s 54.75s Ground Truth Language Query: He shows a water bottle he has along with a brush, and uses the brush to remove snow from the dash window of a car and the water to remove any excess snow left on the windshield. 60.62s VSLBase VSLNet 61.13s 20.86s Language Query: A lady talks with the men as they wait on the crane. 117.75s 51.24s 36.40s Ground Truth 24.45s VSLBase VSLNet 36.60s 51.38s 51.38s (b) Two example cases on the ActivityNet Caption dataset Figure 8: Visualization of predictions by VSLBase and VSLNet. Figures on the left depict the localized results by the two models. Figures on the right show probability distributions of start/end boundaries and highlighting scores. -20 -10 0 10 20 30 Moment Length Error (second) 0 100 200 300 400 500 600 700 800 900 1000 1100 # of Samples VSLBase VSLNet (a) Charades-STA -100 -50 0 50 100 150 200 Moment Length Error (second) 0 500 1000 1500 2000 2500 3000 3500 4000 # of Samples VSLBase VSLNet (b) ActivityNet Caption Figure 9: Plots of moment length errors in seconds between ground truths and results predicted by VSLBase and VSLNet, respectively. 4.4.2 The Impact of Extension Ratio in QGH We now study the impact of extension ratio α in query-guided highlighting module on CharadesSTA dataset. We evaluated 12 different values of α from 0.0 to ∞in experiments. 0.0 represents no answer span extension, and ∞means that the entire video is regarded as foreground. The results for various α’s are plotted in Figure 6. It shows that query-guided highlighting consistently contributes to performance improvements, regardless of α values, i.e., from 0 to ∞. Along with α raises, the performance of VSLNet first increases and then gradually decreases. 
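As a concrete reading of how α controls the highlighted region, the following minimal sketch (ours, not the released code) shows one way the 0-1 foreground labels $Y_h$ of Section 3.5 can be constructed for a given α; clamping to the sequence boundaries is an assumption.

```python
# Sketch of the foreground/background labels Y_h used by QGH (Section 3.5).
# The span [a_s, a_e] is extended on both sides by alpha * L, where L = a_e - a_s;
# clamping to [0, n - 1] is our assumption.
def foreground_labels(a_s: int, a_e: int, n: int, alpha: float):
    extension = int(alpha * (a_e - a_s))
    start = max(0, a_s - extension)
    end = min(n - 1, a_e + extension)
    return [1 if start <= i <= end else 0 for i in range(n)]

# alpha = 0 keeps only the annotated span; a very large alpha marks the entire video
# as foreground, corresponding to the alpha -> infinity case discussed in Section 4.4.2.
```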
The optimal performance appears between α = 0.05 and 0.2 over all metrics. Note that, when α = ∞, which is equivalent to no region is highlighted as a coarse region to locate target moment, VSLNet remains better than VSLBase. Shown in Figure 4, when α = ∞, QGH effectively becomes a straightforward concatenation of sentence representation with each of visual features. The resultant feature remains helpful for capturing semantic correlations between vision and language. In this sense, this function can be regarded as an approximation or simulation of the traditional multimodal matching strategy (Hendricks et al., 2017; Gao et al., 2017; Liu et al., 2018a). 4.4.3 Qualitative Analysis Figure 7 shows the histograms of predicted results on test sets of Charades-STA and ActivityNet Caption datasets. Results show that VSLNet beats VSLBase by having more samples in the high IoU ranges, e.g., IoU ≥0.7 on Charades-STA dataset. More predicted results of VSLNet are distributed in the high IoU ranges for ActivityNet Caption dataset. This result demonstrates the effectiveness 6551 !𝑎! 𝑎"(!𝑎") 𝑎! Language Query: The person turns off the light. 30.12s 46.40s 48.38s (a) A failure case on the Charades-STA dataset with IoU = 0.11. !𝑎! 𝑎! Language Query: After, the man grabs the girl’s arm, then the girl pushes the man over the wall. 56.81s 61.83s 60.86s 𝑎" !𝑎" 38.29s (b) A failure case on the ActivityNet Caption dataset with IoU = 0.17. Figure 10: Two failure examples predicted by VSLNet, as/ae denote the start/end boundaries of ground truth video moment, ˆas/ˆae denote the start/end boundaries of predicted target moment. of the query-guided highlighting (QGH) strategy. We show two examples in Figures 8(a) and 8(b) from Charades-STA and ActivityNet Caption datasets, respectively. From the two figures, the localized moments by VSLNet are closer to ground truth than that by VSLBase. Meanwhile, the start and end boundaries predicted by VSLNet are roughly constrained in the highlighted regions Sh, computed by QGH. We further study the error patterns of predicted moment lengths, as shown in Figure 9. The differences between moment lengths of ground truths and predicted results are measured. A positive length difference means the predicted moment is longer than the corresponding ground truth, while a negative means shorter. Figure 9 shows that VSLBase tends to predict longer moments, e.g., more samples with length error larger than 4 seconds in Charades-STA or 30 seconds in ActivityNet. On the contrary, constrained by QGH, VSLNet tends to predict shorter moments, e.g., more samples with length error smaller that −4 seconds in Charades-STA or −20 seconds in ActivityNet Caption. This observation is helpful for future research on adopting span-based QA framework for NLVL. In addition, we also exam failure cases (with IoU predicted by VSLNet lower than 0.2) shown in Figure 10. In the first case, as illustrated by Figure 10(a), we observe an action that a person turns towards to the lamp and places an item there. The QGH falsely predicts the action as the beginning of the moment ”turns off the light”. The second failure case involves multiple actions in a query, as shown in Figure 10(b). QGH successfully highlights the correct region by capturing the temporal information of two different action descriptions in the given query. However, it assigns “pushes” with higher confidence score than “grabs”. Thus, VSLNet only captures the region corresponding to the “pushes” action, due to its confidence score. 
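For reference, the length-error statistic plotted in Figure 9 can be written in one line; the sign convention (positive means the predicted moment is longer than the ground truth) follows the text above.

```python
# Moment length error used in Figure 9: positive means the predicted moment is longer
# than the ground truth, negative means it is shorter. Spans are (start_sec, end_sec).
def moment_length_error(pred, gold):
    return (pred[1] - pred[0]) - (gold[1] - gold[0])
```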
5 Conclusion By considering a video as a text passage, we solve the NLVL task with a multimodal span-based QA framework. Through experiments, we show that adopting a standard span-based QA framework, VSLBase, effectively addresses NLVL problem. However, there are two major differences between video and text. We further propose VSLNet, which introduces a simple and effective strategy named query-guided highlighting, on top of VSLBase. With QGH, VSLNet is guided to search for answers within a predicted coarse region. The effectiveness of VSLNet (and even VSLBase) suggest that it is promising to explore span-based QA framework to address NLVL problems. Acknowledgments This research is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project #A18A1b0045 and #A18A2b0046). 6552 References Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations. Jo˜ao Carreira and Andrew Zisserman. 2017. Quo vadis, action recognition? a new model and the kinetics dataset. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4724–4733. Jingyuan Chen, Xinpeng Chen, Lin Ma, Zequn Jie, and Tat-Seng Chua. 2018. Temporally grounding natural sentence in video. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 162–171. Association for Computational Linguistics. Jingyuan Chen, Lin Ma, Xinpeng Chen, Zequn Jie, and Jiebo Luo. 2019. Localizing natural language in videos. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8175–8182. Shaoxiang Chen and Yu-Gang Jiang. 2019. Semantic proposal for activity localization in videos via sentence query. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8199– 8206. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171–4186. Association for Computational Linguistics. Jiyang Gao, Runzhou Ge, Kan Chen, and Ramakant Nevatia. 2018. Motion-appearance co-memory networks for video question answering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6576–6585. Jiyang Gao, Chen Sun, Zhenheng Yang, and Ramakant Nevatia. 2017. Tall: Temporal activity localization via language query. In IEEE International Conference on Computer Vision, pages 5277–5285. Runzhou Ge, Jiyang Gao, Kan Chen, and Ram Nevatia. 2019. Mac: Mining activity concepts for languagebased temporal localization. In IEEE Winter Conference on Applications of Computer Vision, pages 245–253. Soham Ghosh, Anuva Agarwal, Zarana Parekh, and Alexander Hauptmann. 2019. ExCL: Extractive Clip Localization Using Natural Language Descriptions. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1984–1990. Association for Computational Linguistics. Dongliang He, Xiang Zhao, Jizhou Huang, Fu Li, Xiao Liu, and Shilei Wen. 2019. Read, watch, and move: Reinforcement learning for temporally grounding natural language descriptions in videos. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8393–8400. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. F. C. Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles. 2015. Activitynet: A large-scale video benchmark for human activity understanding. In IEEE Conference on Computer Vision and Pattern Recognition, pages 961–970. Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2018. Localizing moments in video with temporal language. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1380–1390. Association for Computational Linguistics. Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan C. Russell. 2017. Localizing moments in video with natural language. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 5804–5813. Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. 2018. Fusionnet: Fusing via fullyaware attention with application to machine comprehension. In International Conference on Learning Representations. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations. R. Krishna, K. Hata, F. Ren, L. Fei-Fei, and J. C. Niebles. 2017. Dense-captioning events in videos. In IEEE International Conference on Computer Vision, pages 706–715. Hung Le, Doyen Sahoo, Nancy Chen, and Steven Hoi. 2019. Multimodal transformer networks for endto-end video-grounded dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5612– 5623. Association for Computational Linguistics. 6553 Meng Liu, Xiang Wang, Liqiang Nie, Xiangnan He, Baoquan Chen, and Tat-Seng Chua. 2018a. Attentive moment retrieval in videos. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 15–24. Association for Computing Machinery. Meng Liu, Xiang Wang, Liqiang Nie, Qi Tian, Baoquan Chen, and Tat-Seng Chua. 2018b. Crossmodal moment localization in videos. In Proceedings of the 26th ACM International Conference on Multimedia, pages 843–851. Association for Computing Machinery. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Chujie Lu, Long Chen, Chilie Tan, Xiaolin Li, and Jun Xiao. 2019a. DEBUG: A dense bottom-up grounding approach for natural language video localization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 5147–5156. Association for Computational Linguistics. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019b. 
Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13–23. Curran Associates, Inc. Duy-Kien Nguyen and Takayuki Okatani. 2019. Multitask learning of hierarchical vision-language representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10492–10501. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543. Association for Computational Linguistics. Wasifur Rahman, Md Kamrul Hasan, Amir Zadeh, Louis-Philippe Morency, and Mohammed Ehsan Hoque. 2019. M-bert: Injecting multimodal information in the bert structure. arXiv preprint arXiv:1908.05787. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. Association for Computational Linguistics. Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. 2013. Grounding action descriptions in videos. Transactions of the Association for Computational Linguistics, 1:25–36. Marcus Rohrbach, Michaela Regneri, Mykhaylo Andriluka, Sikandar Amin, Manfred Pinkal, and Bernt Schiele. 2012. Script data for attribute-based recognition of composite activities. In Proceedings of the 12th European Conference on Computer Vision. Springer Berlin Heidelberg. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representations. Gunnar A. Sigurdsson, G¨ul Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. 2016. Hollywood in homes: Crowdsourcing data collection for activity understanding. In European Conference on Computer Vision (ECCV), pages 510–526. Springer International Publishing. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(56):1929–1958. Chen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. 2019a. Contrastive bidirectional transformer for temporal representation learning. arXiv preprint arXiv:1906.05743. Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. 2019b. Videobert: A joint model for video and language representation learning. In 2019 IEEE International Conference on Computer Vision (ICCV). Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5099–5110. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Curran Associates, Inc. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, pages 2692–2700. Curran Associates, Inc. Shuohang Wang and Jing Jiang. 2016. 
Learning natural language inference with LSTM. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1442–1451. Association for Computational Linguistics. 6554 Shuohang Wang and Jing Jiang. 2017. Machine comprehension using match-lstm and answer pointer. In International Conference on Learning Representations. Weining Wang, Yan Huang, and Liang Wang. 2019. Language-driven temporal activity localization: A semantic matching reinforcement learning model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 334– 343. Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 189–198. Association for Computational Linguistics. Aming Wu and Yahong Han. 2018. Multi-modal circulant fusion for video-to-language and backward. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI18, pages 1029–1035. International Joint Conferences on Artificial Intelligence Organization. Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In International Conference on Learning Representations. Huijuan Xu, Kun He, Bryan A. Plummer, L. Sigal, Stan Sclaroff, and Kate Saenko. 2019. Multilevel language and vision integration for text-to-clip retrieval. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9062–9069. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32, pages 5754–5764. Curran Associates, Inc. Adams Wei Yu, David Dohan, Quoc Le, Thang Luong, Rui Zhao, and Kai Chen. 2018. Fast and accurate reading comprehension by combining self-attention and convolution. In International Conference on Learning Representations. Jianfei Yu and Jing Jiang. 2019. Adapting bert for target-oriented multimodal sentiment classification. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI19, pages 5408–5414. International Joint Conferences on Artificial Intelligence Organization. Zhou Yu, Dejing Xu, Jun Yu, Ting Yu, Zhou Zhao, Yueting Zhuang, and Dacheng Tao. 2019. Activitynet-qa: A dataset for understanding complex web videos via question answering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9127–9134. Yitian Yuan, Lin Ma, Jingwen Wang, Wei Liu, and Wenwu Zhu. 2019a. Semantic conditioned dynamic modulation for temporal sentence grounding in videos. In Advances in Neural Information Processing Systems, pages 536–546. Curran Associates, Inc. Yitian Yuan, Tao Mei, and Wenwu Zhu. 2019b. To find where you talk: Temporal sentence localization in video with attention based location regression. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9159–9166. Da Zhang, Xiyang Dai, Xin Wang, Yuan-Fang Wang, and Larry S Davis. 2019. Man: Moment alignment network for natural language moment retrieval via iterative graph adjustment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1247–1257.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6555–6565 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6555 Words Aren’t Enough, Their Order Matters: On the Robustness of Grounding Visual Referring Expressions Arjun R. Akula1∗, Spandana Gella2, Yaser Al-Onaizan2, Song-Chun Zhu1, Siva Reddy3 1UCLA Center for Vision, Cognition, Learning, and Autonomy, 2Amazon AI 3Facebook CIFAR AI Chair, Mila; McGill University [email protected], [email protected], [email protected], [email protected], [email protected] Abstract Visual referring expression recognition is a challenging task that requires natural language understanding in the context of an image. We critically examine RefCOCOg, a standard benchmark for this task, using a human study and show that 83.7% of test instances do not require reasoning on linguistic structure, i.e., words are enough to identify the target object, the word order doesn’t matter. To measure the true progress of existing models, we split the test set into two sets, one which requires reasoning on linguistic structure and the other which doesn’t. Additionally, we create an out-of-distribution dataset Ref-Adv by asking crowdworkers to perturb in-domain examples such that the target object changes. Using these datasets, we empirically show that existing methods fail to exploit linguistic structure and are 12% to 23% lower in performance than the established progress for this task. We also propose two methods, one based on contrastive learning and the other based on multi-task learning, to increase the robustness of ViLBERT, the current state-ofthe-art model for this task. Our datasets are publicly available at https://github.com/ aws/aws-refcocog-adv. 1 Introduction Visual referring expression recognition is the task of identifying the object in an image referred by a natural language expression (Kazemzadeh et al., 2014; Nagaraja et al., 2016; Mao et al., 2016; Hu et al., 2016). Figure 1 shows an example. This task has drawn much attention due to its ability to test a model’s understanding of natural language in the context of visual grounding and its application in downstream tasks such as image retrieval (Young et al., 2014) and question answering (Antol et al., 2015; Zhu et al., 2016). To track ∗Work done in part while AA was intern at Amazon AI. Model pastry on the plate next to a blue fork a blue fork next to the pastry plate r1 r2 r1 r1 Original Expression Adversarial Modification Input Image Region Proposals (r1, r2) Model Figure 1: An example of the visual referring expression recognition task. If the word pastry is present in the referring expression, models prefer the bounding box r1 (highlighted in green) irrespective of the change in linguistic structure (word order). progress on this task, various datasets have been proposed, in which real world images are annotated by crowdsourced workers (Kazemzadeh et al., 2014; Mao et al., 2016). Recently, neural models have achieved tremendous progress on these datasets (Yu et al., 2018; Lu et al., 2019). However, multiple studies have suggested that these models could be exploiting strong biases in these datasets (Cirik et al., 2018b; Liu et al., 2019). For example, models could be just selecting a salient object in an image or a referring expression without recourse to linguistic structure (see Figure 1). This defeats the true purpose of the task casting doubts on the actual progress. 
In this work, we examine RefCOCOg dataset (Mao et al., 2016), a popular testbed for evaluating referring expression models, using crowdsourced workers. We show that a large percentage of samples in the RefCOCOg test set indeed do not rely on linguistic structure (word order) of the expressions. Accordingly, we split RefCOCOg test set into two splits, Ref-Easy and Ref-Hard, where linguistic structure is key for recognition in the latter but not the former (§2). In addition, we create a new out-of-distribution1 dataset called Ref-Adv using Ref-Hard by rewriting a referring expression 1This is a contrast set according to Gardner et al. (2020) 6556 such that the target object is different from the original annotation (§3). We evaluate existing models on these splits and show that the true progress is at least 12-23% behind the established progress, indicating there is ample room for improvement (§4). We propose two new models, one which make use of contrastive learning using negative examples, and the other based on multi-task learning, and show that these are slightly more robust than the current state-of-the-art models (§5). 2 Importance of linguistic structure RefCOCOg is the largest visual referring expression benchmark available for real world images (Mao et al., 2016). Unlike other referring expression datasets such as RefCOCO and RefCOCO+ (Kazemzadeh et al., 2014), a special care has been taken such that expressions are longer and diverse. We therefore choose to examine the importance of linguistic structure in RefCOCOg. Cirik et al. (2018b) observed that when the words in a referring expression are shuffled in random order, the performance of existing models on RefCOCOg drops only a little. This suggests that models are relying heavily on the biases in the data than on linguistic structure, i.e., the actual sequence of words. Ideally, we want to test models on samples where there is correlation between linguistic structure and spatial relations of objects, and any obscurity in the structure should lead to ambiguity. To filter out such set, we use humans. We randomly shuffle words in a referring expression to distort its linguistic structure, and ask humans to identify the target object of interest via predefined bounding boxes. Each image in RefCOCOg test set is annotated by five Amazon Mechanical Turk (AMT) workers and when at least three annotators select a bounding box that has high overlap with the ground truth, we treat it as a correct prediction. Following Mao et al. (2016), we set 0.5 IoU (intersection over union) as the threshold for high overlap. Given that there are at least two objects in each image, the optimal performance of a random choice is less than 50%.2 However, we observe that human accuracy on distorted examples is 83.7%, indicating that a large portion of RefCOCOg test set is insensitive to linguistic structure. Based on this observation, we divide the test set into two splits for fine-grained evaluation of models: Ref-Easy contains samples insensitive 2On average, there are 8.2 bounding boxes per image. Ref-Easy Ref-Hard Ref-Adv data size 8034 (83.7% of RefCOCOg) 1568 (16.3% of RefCOCOg) 3704 avg. length in words 8.0 10.2 11.4 Table 1: Statistics of Ref-Easy, Ref-Hard and Ref-Adv. Ref-Easy and Ref-Hard indicate the proportion of samples in RefCOCOg test set that are insensitive and sensitive to linguistic structure respectively. to linguistic structure and Ref-Hard contains sensitive samples (statistics of the splits are shown in Table 1). 
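A minimal sketch of the shuffling probe used in this section; splitting expressions on whitespace and the helper names are our own choices, while the 0.5 IoU threshold and the three-of-five agreement rule come from the text above.

```python
# Sketch of the Section 2 probe: shuffle the words of a referring expression, collect
# annotator box choices, and count the instance as structure-insensitive if at least
# three of the five workers pick a box with IoU of at least 0.5 against the ground truth.
import random

def shuffle_expression(expression: str, rng: random.Random) -> str:
    words = expression.split()
    rng.shuffle(words)
    return " ".join(words)

def structure_insensitive(worker_ious, min_agree: int = 3, iou_threshold: float = 0.5) -> bool:
    return sum(iou >= iou_threshold for iou in worker_ious) >= min_agree

# Instances that remain solvable after shuffling go to Ref-Easy; the rest form Ref-Hard.
```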
3 An out-of-distribution dataset Due to unintended annotation artifacts in RefCOCOg, it is still possible that models could perform well on Ref-Hard without having to rely on linguistic structure, e.g., by selecting frequent objects seen during training time. Essentially, RefHard is an in-distribution split. To avoid this, we create Ref-Adv, an adversarial test set with samples that may be fall out of training distribution. We take each sample in Ref-Hard and collect additional referring expressions such that the target object is different from the original object. We chose the target objects which humans are most confused with when the referring expression is shuffled (as described in the previous section). For each target object, we ask three AMT workers to write a referring expression while retaining most content words in the original referring expression. In contrast to the original expression, the modified expression mainly differs in terms of the structure while sharing several words. For example, in Figure 1, the adversarial sample is created by swapping pastry and blue fork and making plate as the head of pastry. We perform an extra validation step to filter out bad referring expressions. In this step, three additional AMT workers select a bounding box to identify the target object, and we only select the samples where at least two workers achieve IoU > 0.5 with the target object. Since the samples in Ref-Adv mainly differ in linguistic structure with respect to Ref-Hard, we hope that a model which does not make use of linguistic structure (and correspondingly spatial relations between objects) performs worse on RefAdv even when it performs well on Ref-Hard due to exploiting biases in the training data. Figure 2 shows several examples from the RefEasy, Ref-Hard, and Ref-Adv splits. We note that 6557 Easy: A dinning table with cake and drinks Hard: A chair with a purse hanging from it Adv: The purse which is hanging from a chair Easy: Bus Hard: Bus in the middle of the crowd Adv: The crowd that the bus is in the middle of Easy: The larger of two giraffes Hard: A giraffe eating leaves off the tree Adv: The giraffe that is not eating leaves off the tree Easy: A blue snowboard Hard: A woman wearing a blue jacket and orange glasses next to a woman with a white hood Adv: A woman with a white hood, next to a woman wearing orange glasses and a blue jacket. Easy: Water in a tall, clear glass Hard: The glass of water next to the saucer with the cup on it Adv: The cup on the saucer, next to the glass of water Easy: The short blue bike on the right Hard: The blue bike behind the red car Adv: The red car behind the blue bike Easy: The man with the glasses on Hard: A man holding a cake that is not wearing a tie Adv: The man holding a cake that is wearing a tie Easy: A green cushion couch with a pillow Hard: A green couch across from a white couch Adv: A white couch across from a green couch Figure 2: Examples from Ref-Easy, Ref-Hard, and Ref-Adv splits. As seen, Ref-Hard and Ref-Adv have several words in common but differ in their linguistic structure and the target object of interest. Ref-Adv expressions are longer on average than Ref-Easy and Ref-Hard (Figure 6 in appendix) and consists of rich and diverse spatial relationships (Figure 7 in appendix). Concurrent to our work, Gardner et al. (2020) also propose perturbed test splits for several tasks by modifying in-domain examples. In their setup, the original authors of each task create perturbed examples, whereas we use crowdworkers. 
Closest to our work is Kaushik et al. (2020), who also use crowdworkers. While we use perturbed examples to evaluate robustness, they also use them to improve robustness (we propose complementary methods to improve robustness in §5). Moreover, we are primarily concerned with the robustness of models for the visual referring expression recognition task, while Gardner et al. and Kaushik et al. focus on different tasks (e.g., sentiment, natural language inference).

3.1 Human Performance on Ref-Easy, Ref-Hard and Ref-Adv

We conducted an additional human study (on AMT) to compare human performance on the Ref-Easy, Ref-Hard and Ref-Adv splits. First, we randomly sampled 100 referring expressions from each of the three splits. Each referring expression was then assigned to three AMT workers, who were asked to select a bounding box to identify the target object. We considered a sample to be correctly annotated by humans if at least two out of three workers selected the ground-truth annotation. Through this evaluation, we obtained human performance on the three splits Ref-Easy, Ref-Hard, and Ref-Adv of 98%, 95%, and 96%, respectively.

4 Diagnosing Referring Expression Recognition models

We evaluate the following models, most of which are designed to exploit linguistic structure.

CMN (Compositional Modular Networks; Hu et al. 2017; Andreas et al. 2016) grounds expressions using neural modules by decomposing an expression into <subject, relation, object> triples. The subject and object are localized to objects in the image using a localization module, while the relation between them is modeled using a relationship module. The full network learns to jointly decompose the input expression into a triple while also recognizing the target object.

GroundNet (Cirik et al., 2018a) is similar to CMN; however, it makes use of rich linguistic structure (and correspondingly rich modules) as defined by an external syntactic parser.

MattNet (Yu et al., 2018) generalizes CMN to flexibly adapt to expressions that cannot be captured by the fixed template of CMN. It introduces new modules and also uses an attention mechanism to weigh the modules.

ViLBERT (Lu et al., 2019), the state-of-the-art model for referring expression recognition, uses a pretrain-then-transfer learning approach to jointly learn visiolinguistic representations from large-scale data and utilizes them to ground expressions. This is the only model that does not explicitly model the compositional structure of language, but BERT-like models are shown to capture syntactic structure latently (Hewitt and Manning, 2019).

Figure 3: Multi-task learning model for referring expression recognition with GQA.

4.1 Results and discussion

We trained on the full training set of RefCOCOg and performed hyperparameter tuning on a development set. We used the development and test splits of Mao et al. (2016). Table 2 shows the model accuracies on these splits and on our proposed datasets. The models are trained to select the ground-truth bounding box from a set of predefined bounding boxes. We treat a prediction as positive if the predicted bounding box has IoU > 0.5 with the ground truth.

Although the overall performance on the test set seems high, in reality, models excel only at Ref-Easy while performing poorly on Ref-Hard. The difference in performance between Ref-Easy and Ref-Hard ranges up to 15%. This indicates that current models do not exploit linguistic structure effectively. When tested on Ref-Adv, the performance goes down even further, increasing the gap between Ref-Easy and Ref-Adv (up to 26%). This suggests that models are relying on reasoning shortcuts found in training rather than on actual understanding. Among the models, GroundNet performs the worst, perhaps due to its reliance on the rigid structure predicted by an external parser and the mismatches between the predicted structure and the spatial relations between objects. ViLBERT achieves the highest performance and is relatively more robust than the other models. In the next section, we propose methods to further increase the robustness of ViLBERT.
Model Dev Test Easy Hard Adv GroundNet 66.50 65.80 67.11 54.47 42.90 CMN 70.00 69.40 69.55 68.63 49.50 MattNet 79.21 78.51 80.96 65.94 54.64 ViLBERT 83.39 83.63 85.93 72.00 70.90 Table 2: Accuracy of models on RefCOCOg standard splits and our splits Ref-Easy, Ref-Hard and Ref-Adv. 5 Increasing the robustness of ViLBERT We extend ViLBERT in two ways, one based on contrastive learning using negative samples, and the other based on multi-task learning on GQA (Hudson and Manning, 2019), a task that requires linguistic and spatial reasoning on images. Contrastive learning using negative samples Instead of learning from one single example, contrastive learning aims to learn from multiple examples by comparing one to the other. In order to increase the sensitivity to linguistic structure, we mine negative examples that are close to the current example and learn to jointly minimize the loss on the current (positive) example and maximize the loss on negative examples. We treat the triplets i, e, b  in the training set as positive examples, where i, e, b stands for image, expression and ground truth bounding box. For each triplet i, e, b  , we sample another training example i′, e′, b′ , and use it to create two negative samples, defined by i′, e, b′ and i, e′, b  , i.e., we pair wrong bounding boxes with wrong expressions. For efficiency, we only consider negative pairs from the mini-batch. We modify the batch loss function as follows: L i, e, b  =F(e,e′)  ℓ i, e, b  −ℓ i, e′, b  −τ  + +F(i,i′)  ℓ i, e, b  −ℓ i′, e, b′ −τ  + 6559 Model Dev Test Easy Hard Adv ViLBERT (VB) 83.39 83.63 85.93 72.00 70.90 VB+Sum-H 81.61 83.00 85.93 70.60 72.30 VB+Max-H 82.93 82.70 86.58 70.46 73.35 VB+MTL (GQA) 83.45 84.30 86.23 73.79 73.92 Table 3: Accuracy of enhanced ViLBERT models. Here ℓ(i, e, b) is the cross-entropy loss of ViLBERT, [x]+ is the hinge loss defined by max 0, x  , and τ is the margin parameter. F indicates a function over all batch samples. We define F to be either sum of hinges (Sum-H) or max of hinges (Max-H). While Sum-H takes sum over all negative samples, If batch size is n, for each i, e, b  , there will be n−1 triplets of i′, e, b′ and i, e′, b  . For i, e, b  , there will be one i′, e, b′ and one i, e′, b  . Similar proposals are known to increase the robustness of vision and language problems like visual-semantic embeddings and image description ranking (Kiros et al., 2014; Gella et al., 2017; Faghri et al., 2018). Multi-task Learning (MTL) with GQA In order to increase the sensitivity to linguistic structure, we rely on tasks that require reasoning on linguistic structure and learn to perform them alongside our task. We employ MTL with GQA (Hudson and Manning, 2019), a compositional visual question answering dataset. Specifically, we use the GQA-Rel split which contains questions that require reasoning on both linguistic structure and spatial relations (e.g., Is there a boy wearing a red hat standing next to yellow bus? as opposed to Is there a boy wearing hat?). Figure 3 depicts the neural architecture. We share several layers between the tasks to enable the model to learn representations useful for both tasks. Each shared layer constitute a co-attention transformer block (Co-TRM; Lu et al. 2019) and a transformer block (TRM; Vaswani et al. 2017). While in a transformer, attention is computed using queries and keys from the same modality, in a co-attention transformer they come from different modalities (see cross arrows in Figure 3). 
The shared representations are eventually passed as input to task-specific MLPs. We optimize each task using alternative training (Luong et al., 2015). Results and discussion Table 3 shows the experimental results on the referring expression recognition task. Although contrastive learning improves e1: The ladder that is raised the tallest e2: A wooden boat carries 5 boys with skis e1’: The ladder in front of the raised ladder e2’: A pair of skis in the boat ViLBERT MTL GT Figure 4: Predictions of ViLBERT and MTL model (GT denotes ground-truth). e1′ and e2′ are adversarial expressions of e1 and e2 respectively. the robustness of ViLBERT on Ref-Adv (+1.4% and +2.5% for Sum-H and Max-H respectively), it comes at a cost of slight performance drop on the full test (likely due to sacrificing biases shared between training and test sets). Whereas MTL improves the robustness on all sets showing that multitask learning helps (we observe 2.3% increase on GQA §A.5.2). Moreover, the performance of MTL on Ref-Hard and Ref-Adv are similar, suggesting that the model generalizes to unseen data distribution. Figure 4 shows qualitative examples comparing MTL predictions on Ref-Hard and Ref-Adv parallel examples. These suggest that the MTL model is sensitive to linguistic structure. However, there is still ample room for improvement indicated by the gap between Ref-Easy and Ref-Hard (12.4%). 6 Conclusion Our work shows that current datasets and models for visual referring expressions fail to make effective use of linguistic structure. Although our proposed models are slightly more robust than existing models, there is still significant scope for improvement. We hope that Ref-Hard and Ref-Adv will foster more research in this area. Acknowledgements We would like to thank Volkan Cirik, Licheng Yu, Jiasen Lu for their help with GroundNet, MattNet and ViLBERT respectively, Keze Wang for his help with technical issues, and AWS AI data team for their help with Mechanical Turk. We are grateful to the anonymous reviewers for their useful feedback. 6560 References Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In IEEE Conference on Computer Vision and Pattern Recognition, pages 39–48. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In IEEE international conference on computer vision, pages 2425–2433. Volkan Cirik, Taylor Berg-Kirkpatrick, and LouisPhilippe Morency. 2018a. Using syntax to ground referring expressions in natural images. In AAAI Conference on Artificial Intelligence. Volkan Cirik, Louis-Philippe Morency, and Taylor Berg-Kirkpatrick. 2018b. Visual referring expression recognition: What do systems actually learn? In North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 781–787. Fartash Faghri, David J. Fleet, Jamie Ryan Kiros, and Sanja Fidler. 2018. VSE++: improving visualsemantic embeddings with hard negatives. In British Machine Vision Conference, page 12. Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating nlp models via contrast sets. 
arXiv preprint arXiv:2004.02709. Spandana Gella, Rico Sennrich, Frank Keller, and Mirella Lapata. 2017. Image pivoting for learning multilingual multimodal representations. In Empirical Methods in Natural Language Processing, pages 2839–2845. John Hewitt and Christopher D Manning. 2019. A structural probe for finding syntax in word representations. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4129–4138. Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko. 2017. Modeling relationships in referential expressions with compositional modular networks. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1115–1124. Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell. 2016. Natural language object retrieval. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4555–4564. Drew A Hudson and Christopher D Manning. 2019. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In IEEE Conference on Computer Vision and Pattern Recognition, pages 6700–6709. Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In International Conference on Learning Representations. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. 2014. Referitgame: Referring to objects in photographs of natural scenes. In Empirical methods in natural language processing (EMNLP), pages 787–798. Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. 2014. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer. Runtao Liu, Chenxi Liu, Yutong Bai, and Alan L. Yuille. 2019. Clevr-ref+: Diagnosing visual reasoning with referring expressions. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, pages 4185–4194. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265. Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015. Multi-task sequence to sequence learning. arXiv preprint arXiv:1511.06114. Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. 2016. Generation and comprehension of unambiguous object descriptions. In IEEE conference on computer vision and pattern recognition, pages 11–20. Varun K. Nagaraja, Vlad I. Morariu, and Larry S. Davis. 2016. Modeling context between objects for referring expression understanding. In European Conference on Computer Vision, pages 792–807. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A 6561 cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Association for Computational Linguistics, pages 2556–2565. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78. Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. 2018. Mattnet: Modular attention network for referring expression comprehension. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1307– 1315. Yuke Zhu, Oliver Groth, Michael Bernstein, and Li FeiFei. 2016. Visual7w: Grounded question answering in images. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4995–5004. A Appendix In this supplementary material, we begin by providing more details on RefCOCOg dataset to supplement Section 2 of the main paper. We then provide Ref-Adv annotation details, statistics, analysis, and random examples, to supplement Section 3 of the main paper. Finally, we provide details of our models (initialization & training, hyper-parameters) and show additional results to supplement Section 5 of the main paper. A.1 RefCOCOg vs Other Referring Expressions Datasets RefCOCO, RefCOCO+ (Kazemzadeh et al., 2014) and RefCOCOg (Google-RefCOCO; Mao et al. 2016) are three commonly studied visual referring expression recognition datasets for real images. All the three data sets are built on top of MSCOCO dataset (Lin et al., 2014) which contains more than 300,000 images, with 80 categories of objects. RefCOCO, RefCOCO+ were collected using online interactive game. RefCOCO dataset is more biased towards person category. RefCOCO+ does not allow the use of location words in the expressions, and therefore contains very few spatial relationships. RefCOCOg was not collected in an interactive setting and therefore contains longer expressions. For our adversarial analysis, we chose RefCOCOg for the following three important reasons: Firstly, expressions are longer (by 2.5 times on average) in RefCOCOg and therefore contains more spatial relationships compared to other two datasets. Secondly, RefCOCOg contains at least 2 to 4 instances of the same object type within the same image referred by an expression. This makes the dataset more robust, and indirectly puts higher importance on grounding spatial relationships in finding the target object. Finally, as shown in Table 4, RefCOCO and RefCOCO+ are highly skewed towards Person object category (≈50%) whereas RefCOCOg is relatively less skewed (≈36%), more diverse, and less biased. A.2 Importance of Linguistic Structure Cirik et al. (2018b) observed that existing models for RefCOCOg are relying heavily on the biases in the data than on linguistic structure. We perform extensive experiments to get more detailed insights into this observation. 
Specifically, we distort linguistic structure of referring expressions in the Re6562 RefCOCO RefCOCO+ RefCOCOg Outdoor 0.89% 0.88% 1.65% Food 10.16% 10.07% 8.10% Indoor 3.10% 3.09% 2.59% Appliance 0.67% 0.68% 1.03% Kitchen 3.95% 3.95% 5.40% Accessory 2.33% 2.33% 2.85% Person 49.50% 49.70% 37.02% Animal 13.26% 13.27% 15.05% Vehicle 7.23% 7.22% 10.71% Sports 0.73% 0.74% 1.91% Electronic 1.94% 1.95% 2.56% Furniture 6.14% 6.12% 11.09% Table 4: Distribution of object categories in RefCOCO, RefCOCO+, and RefCOCOg datasets. fCOCOg test split and evaluate the SOTA models that are trained on original undistorted RefCOCOg training split. Similar to (Cirik et al., 2018b), we distort the test split using two methods: (a) randomly shuffle words in a referring expression, and (b) delete all the words in the expression except for nouns and adjectives. Table 5 shows accuracies for the models with (column 3 and 4) and without (column 2) distorted referring expressions. Except for the ViLBERT model(Lu et al., 2019), the drop in accuracy is not significant indicating that spatial relations are ignored in grounding the referring expression. Using the relatively robust ViLBERT model, we repeat this analysis on our splits Ref-Easy, RefHard and Ref-Adv. We randomly sampled 1500 expressions from each of these splits and then compare performance of ViLBERT on these three sets. As shown in Table 6, we find a large difference in model’s accuracy on Ref-Hard and Ref-Adv. This clearly indicates that grounding expressions in both of these splits require linguistic and spatial reasoning. A.3 Ref-Adv Annotation We construct Ref-Adv by using all the 9602 referring expressions from RefCOCOg test data split. As shown in Figure 5, we follow a three stage approach to collect these new samples: Stage 1: For every referring expression in RefCOCOg test split, we perturb its linguistic structure by shuffling the word order randomly. We show each of these perturbed expression along with imModel Original Shuf N+J CMN (Hu et al., 2017) 69.4 66.4 67.4 GroundNet (Cirik et al., 2018a) 65.8 57.6 62.8 MattNet (Yu et al., 2018) 78.5 75.3 76.1 ViLBERT (Lu et al., 2019) 83.6 71.4 73.6 Table 5: RefCOCOg test accuracies of SOTA models on (a) original undistorted split, (b) after randomly shuffling words (Shuf) in the referring expression, and (c) after deleting all the words except for nouns and adjectives (N+J). ViLBERT is relatively more robust than other baselines. Test Original Shuf N+J Ref-Easy 86.40 75.06 76.00 Ref-Hard 72.73 51.13 56.60 Ref-Adv 71.08 50.23 57.40 Table 6: Ref-Easy, Ref-Hard, and Ref-Adv test accuracies of ViLBERT on (a) original undistorted split, (b) after randomly shuffling words (Shuf) in the referring expression, and (c) after deleting all the words except for nouns and adjectives (N+J). ages and all object bounding boxes to five qualified Amazon Mechanical Turk (AMT) workers and ask them to identify the ground-truth bounding box for the shuffled referring expression. We hired workers from US and Canada with approval rates higher than 98% and more than 1000 accepted HITs. At the beginning of the annotation, we ask the turkers to go through a familiarization phase where they become familiar with the task. We consider all the image and expression pairs for which at least 3 out of 5 annotators failed to locate the object correctly (with IoU < 0.5 ) as hard samples (Ref-Hard). We refer to the image-expressions for which at least 3 out of 5 annotators were able to localize the object correctly as easy samples (Ref-Easy). 
On average, we found that humans failed to localize the objects correctly in 17% of the expressions. Stage 2: We take Ref-Hard images and ask turkers to generate adversarial expressions such that the target object is different from the original object. More concretely, for each of the hard samples, we identify the most confused image regions among human annotators as the target objects in stage 1. For each of these target objects, we then ask three 6563 Figure 5: Overview of our three-stage Ref-Adv construction process. Given the image, referring expression, groundtruth bounding boxes for all the samples in RefCOCOg test split, we first filter out the hard samples and then construct adversarial expressions using them. Please refer to section 2 for further detail. Ref-Easy 8034 samples Ref-Hard 1568 samples Ref-Adv 3704 samples Outdoor 1.21% 1.90% 1.97% Food 7.94% 9.80% 9.63% Indoor 2.81% 2.83% 2.76% Appliance 0.80% 1.07% 1.11% Kitchen 4.52% 5.73% 5.77% Accessory 3.20% 5.44% 5.29% Person 37.26% 20.88% 21.01% Animal 15.95% 13.92% 13.90% Vehicle 10.91% 10.40% 10.26% Sports 1.45% 5.04% 5.13% Electronic 2.62% 3.20% 3.31% Furniture 11.28% 19.73% 19.83% Table 8: Distribution of object categories in Ref-Easy, Ref-Hard, and Ref-Adv splits. Figure 6: Referring expression length distribution for Ref-Easy, Ref-Hard, Ref-Adv datasets. Referring Expressions 3704 Unique Images 976 Vocabulary 2319 Avg. Length of Expression 11.4 Table 7: Ref-Adv Statistics length distribution of Ref-Easy, Ref-Hard, and RefAdv. It should be noted that Ref-Adv expressions are longer on average than Ref-Easy and Ref-Hard. Distribution of object categories in Ref-Easy, RefHard and Ref-Adv is shown in Table 8. In comparison to Ref-Easy and Ref-Hard, Ref-Adv is more balanced and less biased towards Person category. Figure 7 shows the relative frequency of the most frequent spatial relationships in all the three splits. As we can see, Ref-Adv comprises of rich and diverse spatial relationships. In Table 2, we show random selection of the Ref-Easy, Ref-Hard, and Ref-Adv splits. A.5 Model and other Experiment Details A.5.1 Datasets GQA (Hudson and Manning, 2019) contains 22M questions generated from Visual Genome (Krishna et al., 2017) scene graphs. However, in our our multi-task training (MTL), we leverage only 1.42M questions that require reasoning on both linguistic structure and spatial relations. We filter these reFigure 5: Overview of our three-stage Ref-Adv construction process. Given the image, referring expression, groundtruth bounding boxes for all the samples in RefCOCOg test split, we first filter out the hard samples and then construct adversarial expressions using them. Please refer to section 2 for further detail. Referring Expressions 3704 Unique Images 976 Vocabulary 2319 Avg. Length of Expression 11.4 Table 7: Ref-Adv Statistics turkers to write a referring expression while retaining at least three content words (nouns and adjectives) in the original referring expression. This generates adversarial expressions for the original ground-truth Ref-Hard referring expressions. Stage 3: We filter out the noisy adversarial expressions generated in stage 2 by following a validation routine used in the generation of RefCOCOg dataset. We ask three additional AMT workers to select a bounding box to identify the target object in the adversarial expression and then remove the noisy samples for which the inter-annotator agreement among workers is low. 
The samples with at least 2 out of 3 annotators achieving IoU > 0.5 will be added to Ref-Adv dataset. A.4 Dataset Analysis, Comparison, and Visualization In Table 7 we summarize the size and complexity of our Ref-Adv split. Figure 6 shows expression length distribution of Ref-Easy, Ref-Hard, and RefAdv. It should be noted that Ref-Adv expressions are longer on average than Ref-Easy and Ref-Hard. Figure 6: Referring expression length distribution for Ref-Easy, Ref-Hard, Ref-Adv datasets. Distribution of object categories in Ref-Easy, RefHard and Ref-Adv is shown in Table 8. In comparison to Ref-Easy and Ref-Hard, Ref-Adv is more balanced and less biased towards Person category. Figure 7 shows the relative frequency of the most frequent spatial relationships in all the three splits. As we can see, Ref-Adv comprises of rich and diverse spatial relationships. In Table 2, we show random selection of the Ref-Easy, Ref-Hard, and Ref-Adv splits. A.5 Model and other Experiment Details A.5.1 Datasets GQA (Hudson and Manning, 2019) contains 22M questions generated from Visual Genome (Krishna et al., 2017) scene graphs. However, in our our multi-task training (MTL), we leverage only 1.42M questions that require reasoning on both linguistic structure and spatial relations. We filter these re6564 Figure 7: Relative frequency of the most frequent spatial relationships in Ref-Easy, Ref-Hard, and Ref-Adv Ref-Easy 8034 samples Ref-Hard 1568 samples Ref-Adv 3704 samples Outdoor 1.21% 1.90% 1.97% Food 7.94% 9.80% 9.63% Indoor 2.81% 2.83% 2.76% Appliance 0.80% 1.07% 1.11% Kitchen 4.52% 5.73% 5.77% Accessory 3.20% 5.44% 5.29% Person 37.26% 20.88% 21.01% Animal 15.95% 13.92% 13.90% Vehicle 10.91% 10.40% 10.26% Sports 1.45% 5.04% 5.13% Electronic 2.62% 3.20% 3.31% Furniture 11.28% 19.73% 19.83% Table 8: Distribution of object categories in Ref-Easy, Ref-Hard, and Ref-Adv splits. lational questions by applying the following constraint on question types: type.Semantic=‘rel’. We also apply this constraint for filtering the development set. We denote this subset as GQA-Rel. We considered GQA-Rel instead of GQA for two reasons: 1) GQA-Rel is a more related task to RefCOCOg; and 2) MTL training with the full GQA set is computationally expensive. For each question in the dataset, there exists a long answer (free-form text) and a short answer (containing one or two words). We only consider the short answers for the questions and treat the unique set of answers as output categories. While the full GQA dataset has 3129 output categories, GQA-Rel contains only 1842 categories. We follow Yu et al. (2018) in creating the train (80512 expressions), val (4896 expressions), and test (9602 expressions) splits of RefCOCOg. For all our experiments in this paper, we directly use the ground-truth bounding box proposals. A.5.2 Training ViLBERT Pre-training We used pre-trained ViLBERT model that is trained on 3.3 million image-caption pairs from Conceptual Captions dataset (Sharma et al., 2018).3 Single-Task Fine-tuning on RefCOCOg In order to fine-tune the baseline ViLBERT (Lu et al., 2019) model on RefCOCOg dataset, we pass the ViLBERT visual representation for each bounding box into a linear layer to predict a matching score (similar to RefCOCO+ training in Lu et al. 2019). We calculate accuracy using IoU metric (prediction is correct if IoU(predicted region, ground-truth region) > 0.5). We use a binary cross-entropy loss and train the model for a maximum of 25 epochs. We use early-stopping based on the validation performance. 
We use an initial learning rate of 4e-5 and use a linear decay learning rate schedule with warm up. We train on 8 Tesla V100 GPUs with a total batch size of 512. Negative Mining We used a batch size of 512 and randomly sample negatives from the minibatch for computational efficiency. We sampled 64 negatives from each batch for both Sum of Hinges and Max of Hinges losses. We fine-tune the margin 3ViLBERT 8-Layer model at the link https:// github.com/jiasenlu/vilbert_beta 6565 Split Before MTL After MTL GQA-Rel Dev 53.7% 56.0% GQA Dev 40.24% 42.1% GQA Test 36.64% 39.2% Table 9: Performance on GQA-Rel Dev, GQA-Dev and GQA-Test splits before and after MTL training with RefCOCOg (Note: MTL training for all the three rows is performed using GQA-Rel and RefCOCOg). ViLBERT Ref-Dev Ref-Test Ref-Adv Without TL and MTL 83.39 83.63 70.90 TL with VQA 82.26 84.14 72.96 TL with GQA 80.60 82.08 70.41 TL with GQA-Rel 81.05 83.12 70.78 MTL with VQA 81.20 82.10 70.82 MTL with GQA-Rel 83.45 84.30 73.92 Table 10: Comparing ViLBERT’s Multi-task Learning (MTL) with Transfer Learning (TL) experiments. RefDev and Ref-Test correspond to: RefCOCOg-Dev and RefCOCOg-Test splits respectively. parameters based on development split. We train the model for a maximum of 25 epochs. We use early-stopping based on the validation performance. We use an initial learning rate of 4e-5 and use a linear decay learning rate schedule with warm up. We train on 8 Tesla V100 GPUs with a total batch size of 512. Multi-Task Learning (MTL) with GQA-Rel The multi-task learning architecture is shown in Figure 3 in the main paper. The shared layers constitute transformer blocks (TRM) and coattentional transformer layers (Co-TRM) in ViLBERT (Lu et al., 2019). The task-specific layer for GQA task is a two-layer MLP and we treat it as a multi-class classification task and the task-specific layer for RER is a linear layer that predicts a matching score for each of the image regions given an input referring expression. The weights for the taskspecific layers are randomly initialized, whereas the shared layers are initialized with weights pretrained on 3.3 million image-caption pairs from Conceptual Captions dataset (Sharma et al., 2018). We use a binary cross-entropy loss for both tasks. Similar to Luong et al. (2015), during training, we optimize each task alternatively in mini-batches based on a mixing ratio. We use early-stopping based on the validation performance. We use an initial learning rate of 4e-5 for RefCOCOg and 2e5 for GQA, and use a linear decay learning rate schedule with warm up. We train on 4 RTX 2080 GPUs with a total batch size of 256. GQA MTL Results Table 3 in the main paper showed that MTL training with GQA-Rel significantly improved the performance of model on RefHard and Ref-Adv splits. In addition, we also observed a significant improvement in GQA-Rel development, GQA development and test splits as shown in the Table 9. A.5.3 Additional Experiments In this subsection, we present results of additional experiments using transfer learning (TL) and multitask learning (MTL) with ViLBERT on VQA, GQA, and GQA-Rel tasks. As shown in Table 10, TL with VQA showed slight improvement. However, TL with GQA, TL with GQA-Rel, and MTL with VQA did not show any improvements 4. 4We could not perform MTL with GQA as it requires large number of computational resources.
2020
586
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6566–6577 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6566 A Mixture of h −1 Heads is Better than h Heads Hao Peng♠ Roy Schwartz♦♠ Dianqi Li♣ Noah A. Smith♦♠ ♦Allen Institute for Artificial Intelligence ♠Paul G. Allen School of Computer Science & Engineering, University of Washington ♣Department of Electrical & Computer Engineering, University of Washington {hapeng,roysch,nasmith}@cs.washington.edu, [email protected] Abstract Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks. Evidence has shown that they are overparameterized; attention heads can be pruned without significant performance loss. In this work, we instead “reallocate” them—the model learns to activate different heads on different inputs. Drawing connections between multi-head attention and mixture of experts, we propose the mixture of attentive experts model (MAE). MAE is trained using a block coordinate descent algorithm that alternates between updating (1) the responsibilities of the experts and (2) their parameters. Experiments on machine translation and language modeling show that MAE outperforms strong baselines on both tasks. Particularly, on the WMT14 English to German translation dataset, MAE improves over “transformer-base” by 0.8 BLEU, with a comparable number of parameters. Our analysis shows that our model learns to specialize different experts to different inputs.1 1 Introduction The transformer architecture and its variants achieve state-of-the-art performance across a variety of NLP tasks, including machine translation (Vaswani et al., 2017; Ott et al., 2018), language modeling (Radford et al., 2018; Baevski and Auli, 2019), semantic role labeling (Strubell et al., 2018), and more (Devlin et al., 2019; Liu et al., 2019b; Yang et al., 2019b). Under the hood, multihead attention provides the driving force: multiple separately parameterized attention functions act in parallel to contextualize the input representations; their outputs are then gathered by an affine transformation, and fed to onward computation. 1Our implementation is publicly available at https:// github.com/Noahs-ARK/MAE. Experts: Input Attention heads: H1 H2 H3 H4 g H1 H2 H4 H1 H2 H3 f1 f2 H1 H3 H4 f3 H2 H3 H4 f4 0.1 0.2 0.3 0.4 Figure 1: Illustration of MAE: a mixture of attentive experts. Each Hi box is an attention head in a given layer; there are h of them in total. Experts are groups of h −1 attention heads. MAE learns an input-dependent distribution of the experts (g). At each training step, a single expert is selected and updated (solid line); during the evaluation, experts’ outputs are linearly combined with weights produced by g. Recent efforts by Voita et al. (2019) and Michel et al. (2019) suggest that typical transformer networks are overparameterized, in the sense that at test time, many of the heads, or even a full layer (Fan et al., 2020), can be removed without significant loss in performance.2 In response to this observation, they propose to prune the unimportant attention heads in the model after it is trained, aiming for faster inference. In this paper, we ask whether, instead of reducing the model capacity, we can use it more effectively. We propose mixture of attentive experts (MAE). MAE retains all attention heads, and learns to activate different heads on different inputs (see illustration in Figure 1). 
We start by showing that multi-head attention can be seen as an uniform, input-agnostic mixture of experts (Jacobs et al., 1991), by grouping a subset of atten2We do not argue that overparameterization is bad for training. In fact, it may be necessary for successful optimization and good generalization (Neyshabur et al., 2014; Zhang et al., 2016; Soudry and Carmon, 2016, inter alia). Rather, we try to explore more efficient ways to use the modeling capacity, than, e.g., removing part of the model. 6567 tion heads as an expert (§2.2). We then introduce MAE, which instead of uniformly weighting the experts, complements the experts with a learned, input-dependent function that assigns their responsibilities (§2.3). To train MAE, we propose a two-step algorithm based on block coordinate descent (§3), which alternates between updating the experts’ responsibilities and their parameters. We evaluate MAE on machine translation and language modeling (§4). Our approach outperforms strong baselines on both; on the WMT14 English to German MT dataset, MAE outperforms transformer-base (Vaswani et al., 2017) by 0.8 BLEU with a negligible increase in the number parameters. Our analysis shows that MAE learns to encourage different experts to specialize on different inputs (§5). 2 MAE: Mixture of Attentive Experts This section describes MAE in detail. It is inspired by a mixture-of-experts view of multi-head attention, which we present in §2.2. Specifically, we show that multi-head attention can be viewed as a mixture of uniformly weighted experts, each consisting of a subset of attention heads. Based on this observation, we propose MAE, which learns to weight the experts (§2.3) depending on the input. We begin by laying out notation and necessary background in §2.1. 2.1 Background: Mixture of Experts Mixture of experts is a well-established technique for ensemble learning (Jacobs et al., 1991). It jointly trains a set of expert models {fi}k i=1 that are intended to specialize across different input cases. The outputs produced by the experts are aggregated by a linear combination, with a “gating function” g = [g1, . . . , gk] determining the importance of each expert in the final decision: MoE(x) = k X i=1 gi(x) · fi(x). (1) The gating function can be parameterized by, e.g., a neural network. We will also refer to g as the responsibilities or weights of the experts. 2.2 Multi-Head Attention: a Mixture-of-Experts Perspective Multi-head attention is the key building block for the state-of-the-art transformer architectures (Vaswani et al., 2017). At its core are multiple separately parameterized attention heads. An attention head takes as input a n-by-d matrix X, with each row being the vector representation of an input element. It contextualizes the input using a dot-product attention mechanism: eHi = softmax  XQiK⊤ i X⊤ XVi, (2) where Qi, Ki, and Vi are learned matrices,3 and the softmax normalizes row-wise. The outputs of attention heads are then concatenated and fed through a learned affine transformation: Z ≜MultiHead (X) = h eH1; . . . ; eHh i W (3) where W is a learned matrix, and h denotes the number of attention heads. We now present a different computation equivalent to Eq. 3, aiming for a smoother transition into following sections. Let Hi = eHiWi, where Wi is a block submatrix of W, i.e., W = [W⊤ 1 ; W⊤ 2 , . . . ; W⊤ h ]⊤. Then Z = h eH1; . . . ; eHh i W = h X i=1 Hi. (4) Eq. 
4 provides a different view of the output computation of the multi-head attention: each attention head first projects the contextualized representation with a learned matrix (i.e., Hi = eHiWi), then their outputs are gathered with a sum (Eq. 4). We now show that this can be seen as a uniformly weighted mixture of experts. A mixture-of-experts perspective. Let us take a closer look at Eq. 4 and rewrite it: Z = 1 h −1 h X i=1 (−1 + h) Hi = 1 h −1  − h X i=1 Hi + h X i=1 h X j=1 Hj   = h X i=1 1 h |{z} gate gi h h −1  −Hi + h X j=1 Hj   | {z } expert fi (X; θi) . (5) Eq. 5 interprets multi-head attention as a mixture of h h−1  = h experts. It first constructs a set of h experts {fi(·; θi)}, with θi denoting fi’s param3Some authors explicitly distinguish queries, keys, and values (Vaswani et al., 2017). These inputs can sometimes differ, e.g., in encoder-decoder attention. We suppress such differences for clarity. 6568 eters. fi(·; θi) is a parameterized function of the input, which calculates a sum of the outputs by all but the ith attention head. This is achieved by subtracting Hi from Ph j=1 Hj, then scaling up the results by h/(h −1). The experts share part of the parameters: any two share h −2 attention heads. A uniform responsibility of 1/h is used. Discussion. Viewing multi-head attention through this MoE lens suggests some interesting consequences. One can replace the input-agnostic responsibility in Eq. 5 with a function over the input. Indeed, we have good reasons for doing so. Voita et al. (2019) and Michel et al. (2019) show that for transformer networks, a handful of important attention heads are sufficient to achieve good test-time performance. They propose to prune the rest using an input-agnostic procedure. Instead of doing so, here we see a potential alternative: keep all the heads, but only activate those that are important to the input. This motivates MAE, which we now introduce. 2.3 MAE: Learning to Weight Experts MAE is inspired by the connections between MoE and multi-head attention we draw in §2.2. On top of multi-head attention, MAE learns an inputdependent parameterized gating function g(·; φ) to complement the experts. More formally, the uniform responsibility 1/h in Eq. 5 is replaced by g(·; φ): given input X, MAE outputs h X i=1 gi(X; φ) · fi(X; θi). (6) Experts fi are the same as those in Eq. 5. g(·; φ) is parameterized with a multi-layer perceptron (MLP) followed by a softmax. It first averages X along the row (i.e., the sequence direction), and then feeds the results through a twolayer tanh-MLP. g(·; φ) outputs a normalized hdimensional vector using a softmax, indicating the responsibilities of the experts. It can be seen as a learned probability distribution over the experts. MAE can learn to assign more responsibility to the experts that are more important to the given input, allowing them to contribute more. MAE is applicable wherever multi-head attention is used. For example, in a machine translation experiment (§4.2), we replace with MAE all the multi-head attention in a transformer network, including the self-attention in all encoder and decoder layers, as well as those attending over the encoded source from the decoder. Each of them is separately treated as a mixture of experts, and has its own gating function. The additional parameter overhead is small: gating functions account for only 3–5% parameters of the full model (Appendix A). 
3 Training MAE with Block Coordinate Descent It is straightforward to jointly train the experts and the gating functions in an MAE model using backpropagation. However, in line with previous observations (Shen et al., 2019), we empirically observe that this is prone to degenerate solutions where the gating functions tend to learn to similarly weight the experts (see §5.1).4 As a remedy, we propose a block coordinate descent (BCD) training. At a high level, training is decomposed into two interleaving steps: A G step updates the gating function g(·; φ), fixing the experts; an F step fixes the gating function and updates one randomly selected expert fi(·; θi).5 The computations for G and F steps differ: • In a G step, MAE outputs a linear combination of the experts’ outputs, and only updates the gating function’s parameters (Algorithm 1). No expert is updated. • An F step computes the experts’ responsibilities g(X), according to which an expert i is then sampled (Algorithm 2). MAE computes the output with fi, which is then updated, without updating the gating function or other experts.6 A non-differentiable sampling from g is involved in F steps. It does not create difficulties for the 4Besides the undesired degeneracy, we also find that the model suffers worse overfitting when θ and φ are jointly updated (Appendix B). One possible reason is that, compared to the standard multi-head attention, the learned gates give the model additional capacity to compensate for the experts’ errors with others’ outputs at training time, hurting generalization (Jacobs et al., 1991). Another common degeneracy of MoEs is the “rich get richer” where one of the experts is always picked and others ignored. As observed by Voita et al. (2019), this can happen when the experts are trained to be sparsely weighted. When tuning the hyperparameters, we observe the “rich get richer” degeneracy if the learning rate is set too large. 5For clarity, our discussion focuses on θ and φ. The rest of the model, e.g., the word embeddings in a transformer network, are updated along with θ. Training aims to minimize loss L over {θ, φ}. 6In mini-batch training, which we use in the experiments, different experts can be sampled for different instances in a mini-batch. This is because g depends on the inputs. This means that multiple experts will be updated in an F step, but each due to a subset of the examples in the mini-batch. 6569 Algorithm 1 A G step update for MAE, with step size η. 1: procedure MAEG(X) 2: Z ←Ph i=1 gi(X; φ) · fi(X; θi) 3: Forwardprop with Z and calculate L. 4: Calculate ∇φL with backprop. 5: φ ←φ −η · ∇φL. 6: end procedure Algorithm 2 An F step update for MAE, with step size η. 1: procedure MAEF(X) 2: Draw i ∼Cat(g(X; φ)) 3: Z ←fi(X; θi) 4: Forwardprop with Z and calculate L. 5: Calculate ∇θiL with backprop. 6: θi ←θi −η · ∇θiL. 7: end procedure backpropagation, since an F step never calculates the gradients w.r.t. φ. At test time, the computation is the same as that in a G step, i.e., MAE outputs a linear combination of the experts, weighted by g. Training time overhead. A straightforward training procedure is to, for each training instance, first take a G step, and then an F step. This doubles the forward propagation computation overhead. In practice, it is not necessary to take G steps as frequently as F steps, since they only update a small portion of the model. In the experiments, we take G steps one fifth as frequently as F steps: we make G updates every 5 epochs while always take F steps. 
In preliminary experiments, we find this reduces training time overhead without significant impact on the performance.7 Algorithm 3 summarizes the block coordinate descent training in a given epoch. Connections to dropout. In the above block coordinate descent training algorithm, an F step samples an expert to update, and ignores the rest in both forward and backward computation. It is reminiscent of dropout (Srivastava et al., 2014). Specifically, selecting expert fi is equivalent to 7In this way, training time for MAE is roughly 1.2 times longer than that of the transformer network it builds on. 8Although we assume supervised learning, we suppress the gold outputs for notational clarity. We slightly overload the notation and denote by Xi the training instance, although they cab also be the outputs of intermediate layers. Algorithm 3 Block coordinate descent (BCD) training for MAE, at epoch e. D denotes the training data.8 1: procedure BCD(D = {Xi}i, e) 2: for Xi ∈D do 3: ▷Take G steps every 5 epochs. 4: if e mod 5 = 0 then 5: MAEG(Xi) 6: end if 7: ▷Always do F step updates. 8: MAEF(Xi) 9: end for 10: end procedure dropping head i.9 In other words, the F steps (Algorithm 2) can be seen as a structured dropout applied to the attention heads, but with learned input-dependent drop probabilities. When g is a constant vector with elements 1/h, it recovers the head dropout, which is also explored by concurrent work (Fan et al., 2020). So far, we view MAE as a mixture of h experts, each consisting of h −1 attention heads. One can, of course, generalize this to other settings, e.g., mixing h h−2  experts, each containing h−2 heads. From the dropout view, this translates to dropping more attention heads: dropping t heads out of h is equivalent to applying a dropout with drop probability t/h, in the sense that their expected numbers of dropped units are the same. Despite the similarity between MAE and dropout, a key difference exists between the two: with the latter, the constant dropout probability is set a priori, while MAE uses a gating function g(·; φ) to calculate a learned, input-dependent dropout probability. 4 Experiments We empirically evaluate MAE on machine translation (§4.2) and language modeling (§4.3) benchmarks. We first introduce the compared models (§4.1). 4.1 Compared Models MAE is evaluated under two settings: • MAE-7 mixes 8 experts each with 7 attention heads. 9Recall from Eq. 5 that fi includes all but head i. 6570 • MAE-6 is similar to MAE-7, but mixes 8 2  = 28 experts each with 6 attention heads.10 We compare MAE to the following baselines. • BASE is a sequence-to-sequence model based on the transformer architecture. • NOBCD is the same model as MAE, but does not use block coordinate descent training. Instead, it jointly updates all experts and the gating function at training time, as discussed at the start of §3. • UNI-MAE-7 is similar to MAE but does not have parameterized gating functions. It builds on BASE, and mixes 8 experts, each with 7 attention heads. Constant uniform responsibilities are assigned to the experts. At each training step, it updates one uniformly sampled expert; at test time, the outputs of all experts are averaged according to Eq. 5. • UNI-MAE-6 mixes 28 6-attention-head experts, and is otherwise the same as UNIMAE-7. We refer the readers to Appendix A for implementation details. 4.2 Machine Translation Datasets. 
We experiment with two machine translation datasets: • WMT14 EN-DE (Bojar et al., 2014).11 Following previous practice (Vaswani et al., 2017) we train on WMT14, and designate newstest2013 and newstest2014 as development and test data respectively. Our preprocessing follows that of Vaswani et al. (2017) and Ott et al. (2018). A shared source-target vocabulary is used, with 32k byte pair encoding types (BPE; Sennrich et al., 2016). • IWSLT14 DE-EN (Cettolo et al., 2014).12 It is based on TED talks, and is much smaller compared to WMT14. We use the preprocessing from Edunov et al. (2018). Following previous practice, we use separate vocabularies for the source and target, with around 9K and 7K BPE types respectively. Table 1 summarizes some statistics of the datasets. 10Preliminary results show that mixing experts with fewer heads leads to underwhelming performance. We conjecture this is due to too strong a regularization effect (§3). 11https://drive.google.com/a/ haopeng.name/uc?export=download&id=0B_ bZck-ksdkpM25jRUN2X2UxMm8 12http://workshop2014.iwslt.org/. Data Train Dev. Test Vocab. WMT14 4.5M 3K 3K 32K IWSLT14 160K 7K 7K 9K/7K Table 1: Some statistics for WMT14 and IWSLT14 datasets. We use separate source and target vocabularies in IWSLT14 experiments. Evaluation. The models are evaluated using BLEU (Papineni et al., 2002). A beam search with beam size 5 is used. In the WMT14 experiments, we follow Vaswani et al. (2017), and apply a compound split postprocessing.13 Results. Table 2 summarizes WMT14 EN-DE translation test performance. The base and large sized transformer models are due to Vaswani et al. (2017). To control for compounding factors, we additionally compare to our implementation of the base sized model (BASE). It achieves slightly better performance than Vaswani et al. (2017), with a 0.3 BLEU edge. MAE-7 improves over the base transformer by 0.8 BLEU, obtaining similar performance to the large-size transformer of Vaswani et al. (2017) using less than a third as many parameters. Since we do not see similar improvement by UNI-MAE-7, we attribute this gain to inputdependent expert weighting. Having a smaller number of heads for each expert, MAE-6 slightly underperforms MAE-7, and so does UNI-MAE-6 in comparison to UNI-MAE-7. Finally, NOBCD gets worse performance than the transformer baseline, demonstrating the importance of the block coordinate decent training. We observe similar trends on the IWSLT14 DEEN dataset, summarized in Table 3. The BASE model here is similar to the base-sized transformer in the WMT14 experiment, but with a smaller hidden dimension. MAE-7 outperforms BASE by 0.9 BLEU. Interestingly, UNI-MAE-7 improves over BASE by 0.3 BLEU, possibly because the regularization effect of random expert selection training helps more on this smaller dataset.14 4.3 Token-level Language Modeling Dataset. We experiment with the WikiText-103 dataset (Merity et al., 2016). It contains articles 13https://github.com/tensorflow/ tensor2tensor/blob/master/tensor2tensor/ utils/get_ende_bleu.sh 14Selecting an expert can be seen dropping one attention head in training (§3). 6571 Model BLEU # Params. Base Transformer 27.3 65M Large Transformer 28.4 213M BASE 27.6 61M ‡NOBCD 27.5 63M †UNI-MAE-7 27.7 61M †UNI-MAE-6 27.6 61M †‡MAE-7 28.4 63M †‡MAE-6 28.1 63M Table 2: WMT14 EN-DE translation test performance on newstest2014. † randomly select an expert to update for each training instance, and ‡ learns a gating function to weight the experts. 
Transformer performance in the first two rows are due to Vaswani et al. (2017). Model BLEU # Params. BASE 34.6 39M ‡NOBCD 34.8 41M †UNI-MAE-7 34.9 39M †UNI-MAE-6 35.0 39M †‡MAE-7 35.5 41M †‡MAE-6 35.4 41M Table 3: IWSLT14 GE-DE test set performance. See Table 2 caption for indications of the superscripts. from English Wikipedia, with a 268K-sized vocabulary. The training/development/test data respectively have 103M/218K/246K tokens. Setting. Here the BASE model is the strong language model by Baevski and Auli (2019). It is based on a 16-layer transformer network; each multi-head attention layer has 8 heads. It uses different embedding dimensions for the tokens, based on their frequencies. We closely follow Baevski and Auli (2019) in terms of hyperparameters and training procedures. The readers are referred to their paper and Appendix A for further architecture and hyperparameter details. Notes on context size. Baevski and Auli (2019) study the effect of context window, i.e., the number of history tokens the model attends over. They find that using larger context sizes lead to better performance (Baevski and Auli, 2019, Table 5). Their best setting uses a 3,072 training context size, and 2,048 at test time (i.e., the model has access 2,048 tokens before predicting any token at test time). However, we are not able to train MAE, Model Perplexity # Params. ⋆BASE (B&A, 2019) 18.70 247M BASE (B&A, 2019) 19.03 247M ‡NOBCD 19.12 249M †UNI-MAE-7 19.26 247M †‡MAE-7 18.71 249M Table 4: Language modeling performance on WikiText-103 test set (lower is better). ⋆Trains/evaluates with 3,072/2,048 context sizes and therefore not directly comparable to other models which use 512/480 sized ones. See Table 2 caption for the indications of other superscripts. Bold font indicates the best performance using smaller context sizes. The first two rows are due to Table 5 of Baevski and Auli (2019). nor replicate their results, under this setting—our GPUs have far less memory, and it is impossible to even load a 3,072-token context chunk.15 Therefore we train and evaluate MAE and UNI-MAE-7 with smaller 512/480 context sizes, also explored by Baevski and Auli (2019), which allows for a head-to-head comparison. Results. Table 4 shows the perplexity on WikiText-103 test data. When trained under the same setting, MAE outperforms Baevski and Auli (2019) by more than 0.3 perplexity. Interestingly, despite the much smaller context at both training and test time, MAE matches the best setting by Baevski and Auli (2019). UNI-MAE-7 and NOBCD underperform the baseline (higher perplexity). 5 Analysis This section first empirically confirms that MAE learns to activate different experts on different inputs in §5.1. We then run a synthetic experiment to explore MAE’s potential in transfer learning (§5.2). 5.1 Does MAE Learn to Specialize the Experts? One of the appealing properties of MoE models is that they could learn to activate different experts, depending on what “expertise” is needed for the 15Baevski and Auli (2019) use NVIDIA Tesla V100 GPUs with 32GB memory, while we only have access to GeForce RTX 2080 Ti, with 11GB memory. 6572 Model BLEU Diff. UNI-MAE-7 26.6 One random expert 25.8±0.2 ↓0.8±0.2 NOBCD 26.7 Most specialized expert 26.0 ↓0.7 MAE-7 27.1 Most specialized expert 26.8 ↓0.3 Table 5: Performance decrease for different models on WMT14 development set when only one expert is used for each multi-head attention layer (5.1). input. Does MAE learn to do so? 
We empirically study this question, and present evidence indicating that it does, at least in part. We consider the encoders of the UNI-MAE-7, NOBCD, and the MAE-7 models trained on WMT14.16 We first study whether BCD training helps drifting MAE away from uniformly weighting the experts agnostic to the inputs. We treat the gating values as probabilities, and calculate their entropies: H(g) = −Ph i=1 gi · log gi, which are then averaged across different layers. The average entropy on the development set for MAE-7 is 1.91, lower than the 2.02 by the NOBCD model trained without BCD. In comparison, UNI-MAE-7 uniformly weights the experts and has the entropy of 2.08. This indicates that gating weights of MAE trained with BCD are more “focused” on one or a subset of experts than trained without. Second, we study whether MAE learns to specialize different experts for different inputs. To do so we attribute the development instances to the experts that maximize the gating weights. For the first encoder layer of MAE-7, the percentages of instances attributed to each of the 8 experts are relatively balanced: 13%, 14%, 9%, 16%, 10%, 15%, 10%, 12%.17 This suggests that all experts are assigned a substantial part of the input, and it is not the case that BCD leads to a “rich get richer” outcome. We then continue and explore whether MAE performs reasonably well when using only the most “specialized” experts. For each development instance, we select those experts maximizing the 16The same experiments can be done with the decoders, where the inputs to gating functions are German sentences. The authors lack German expertise, and interpretation of a following analysis would not have been possible for us. 17We observe similar trends in other layers. See Appendix C for more details. Expert 1 Expert 2 Expert 3 Expert 4 neumann bell candidacy veil debuted zero rose monument rental computing submission fox worthy decentralized palm unnerved landloards reuters roles remainder Expert 5 Expert 6 Expert 7 Expert 8 spoil menses romans odds anybody technological sticker heat endorsed inevitably outdated marvel reserve bet analyst ornate pending punk venues anticipating Table 6: Indicative tokens for each expert (§5.1). Tokens attributed to Expert 2 are mostly computer science terminology; trends for other experts are less clear. gating weights and ignore the rest, instead of linearly combining them as in Eq. 6. We see from Table 5 a 0.3 BLEU decrease under this setting. In comparison, NOBCD has a larger performance decrease of 0.7 BLEU. NOBCD’s performance drop is similar to that of UNI-MAE-7, for which we randomly select an expert at each layer and average the performance over 5 runs. These results support the proposition that MAE specializes better when trained with BCD. Finally, we search for the tokens that are more likely to activate each expert. We compute the pointwise mutual information (PMI; Church and Hanks, 1990) between tokens and experts: PMI(tokeni, expertj) = log p(tokeni, expertj) p(tokeni)p(expertj). Table 6 lists the most indicative tokens of each expert, for the first layer. While some of the terms for some experts seem loosely related (e.g., bell, reuters, and computing for expert 2, it is hard to find clear patterns in most of them. 5.2 MAE’s Potential in Transfer Learning: A Case Study We now turn to evaluate another property of MAE: its potential for data-efficient transfer learning, by only updating the gating functions, freezing the experts. We consider the pretrain-then-finetune setting. 
Due to computation limits, we are unable to explore MAE for pre-training contextual representations (Peters et al., 2018; Devlin et al., 2019). Rather, we focus on the following small-scale machine translation experiments. Setting. We explore finetuning on IWSLT14 EN-DE data, a MAE model pretrained on the 6573 20 40 60 80 100 % Training Data 31.0 31.2 31.4 31.6 31.8 32.0 32.2 Dev. BLEU Finetune G+ Finetune All Figure 2: IWSLT14 development performance of FTG+ and FTALL using different amount of training data (§5.2). When trained on less than 20% subset of the original training data, FTG+ outperforms FTALL. much larger WMT14 dataset.18 We compare three finetuning methods: • FTG finetunes the gating functions’ parameters (i.e., φ), keeping the rest frozen. • FTG+ updates the parameter matrix W in Eq. 4 in addition to φ. The rest of the model parameters are fixed. • FTALL updates all parameters. As a baseline, NOFT is the out-of-box pretrained model without any finetuning. SCRATCH trains a MAE model from scratch. Table 7 summarizes the IWSLT14 EN-DE development set performance. Surprisingly, NOFT already outperforms SCRATCH without any finetuning. We attribute this improvement to the larger pretraining (WMT14) data. Only updating the gating functions, FTG improves over NOFT by 0.8 BLEU. Yet there is still a significant gap of 1.8 BLEU between FTG and FTALL. Interestingly, FTG+ almost matches the performance of FTALL, but only updates 1/9 as many parameters. Both FTG and FTG+ reach the best performance after around 1K gradient updates, i.e., one epoch, significantly less than FTALL or SCRATCH. We further compare FTG+ and FTALL where less downstream training data is available. To simulate this, we randomly sample [5%, 10%, 25%, 50%, 75%] subsets of IWSLT14 training data, on which the pretrained model is finetuned. Figure 2 plots their performance. We see a clear trend: as less training data is available, the gap between FTG+ and FTALL decreases; when less than 20% of the training data is available, FTG+ outperforms FTALL. These results suggest that finetuning MAE with FTG+ can be viable in lowresource transfer learning. 18Here we reverse the translation direction of IWSLT14: §4.2 experimented with DE-EN, here we use EN-DE. Method BLEU # Params. # Steps. SCRATCH 28.8 41M 52K NOFT 29.3 0 0 FTG 30.1 2M 1K FTG+ 31.6 7M 1K FTALL 31.8 63M 12K Table 7: IWSLT14 development set performance of different finetuning methods (§5.2). The last two columns indicate the number of parameters to update, and the number of gradient steps needed to achieve the best development performance. 6 Related Work Multi-head attention. An increasing amount of effort has been devoted into developing better attention mechanisms (Malaviya et al., 2018; Deng et al., 2018; Sukhbaatar et al., 2019; Correia et al., 2019; Maruf et al., 2019, inter alia), and improving transformer architectures (Shaw et al., 2018; Dehghani et al., 2019; Hao et al., 2019; Correia et al., 2019; Yang et al., 2019a, inter alia). Closely related, Iida et al. (2019) applies another attention mechanism over the attention heads, allowing a learned reweighting of them. Our work focuses on the connection between multi-head attention and MoE, and the BCD training it suggests and benefits from. Concurrent to our work, (Fan et al., 2020) study structurally pruning transformer layers for more efficient inference. Another line of work aims to better understand the working of transformer models (Clark et al., 2019; Liu et al., 2019a; Tenney et al., 2019, inter alia). 
Mixture of experts. One of the most successful applications of MoE is ensemble learning (Caruana et al., 2004; Liu et al., 2018; Dutt et al., 2017, inter alia). Recent efforts also explore MoE in sequence learning (Shazeer et al., 2017), and to promote diversity in text generation (He et al., 2018; Shen et al., 2019; Cho et al., 2019, inter alia). 7 Conclusion We presented MAE. It is inspired by a mixture-ofexperts perspective of multi-head attention. With a learned gating function, MAE activates different experts on different inputs. MAE is trained using a block coordinate descent algorithm, which alternates between updating the responsibilities of 6574 the experts and their parameters. Our experiments show that MAE outperforms the transformer baselines on machine translation and language modeling benchmarks. The analysis shows that MAE learns to activate different experts. The code is publicly available at https://github.com/ Noahs-ARK/MAE. Acknowledgments We thank the anonymous reviewers, Yoav Artzi, Mandar Joshi, Jungo Kasai, Lingpeng Kong, Kenton Lee, Kelvin Luu, Will Merrill, Phoebe Mulcaire, Mark Neumann, Nikos Pappas, Ofir Press, Lianhui Qin, Swabha Swayamdipta, Vivek Srikumar, Sam Thomson, and Dani Yogatama for their helpful feedback. This work was supported in part by NSF grant 1562364, a Google Fellowship, and NVIDIA Corporation through the donation of a Tesla GPU. References Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language modeling. In Proc. of ICLR. Ondˇrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleˇs Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proc. of WMT. Rich Caruana, Alexandru Niculescu-Mizil, Geoff Crew, and Alex Ksikes. 2004. Ensemble selection from libraries of models. In Proc. of ICML. Mauro Cettolo, Jan Niehues, Sebastian St¨uker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign. In Proc. of IWSLT. Jaemin Cho, Minjoon Seo, and Hannaneh Hajishirzi. 2019. Mixture content selection for diverse sequence generation. In Proc. of EMNLP. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22–29. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT’s attention. In Proc. of BlackBoxNLP. Gonc¸alo M. Correia, Vlad Niculae, and Andr´e F.T. Martins. 2019. Adaptively sparse transformers. In Proc. of EMNLP. Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. 2019. Universal transformers. Yuntian Deng, Yoon Kim, Justin Chiu, Demi Guo, and Alexander Rush. 2018. Latent alignment and variational attention. In Proc. of NeurIPS. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL. Anuvabh Dutt, Denis Pellerin, and Georges Qu´enot. 2017. Coupled ensembles of neural networks. arXiv:1709.06053. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to sequence learning. In Proc. of NAACL. Angela Fan, Edouard Grave, and Armand Joulin. 2020. Reducing transformer depth on demand with structured dropout. In Proc. of ICLR. 
Jie Hao, Xing Wang, Baosong Yang, Longyue Wang, Jinfeng Zhang, and Zhaopeng Tu. 2019. Modeling recurrence for transformer. In Proc. of NAACL. Xuanli He, Gholamreza Haffari, and Mohammad Norouzi. 2018. Sequence to sequence mixture model for diverse machine translation. In Proc. of CoNLL. Shohei Iida, Ryuichiro Kimura, Hongyi Cui, Po-Hsuan Hung, Takehito Utsuro, and Masaaki Nagata. 2019. Attention over heads: A multi-hop attention for neural machine translation. In Proc. of ACL: Student Research Workshop. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. of ICML. Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. 1991. Adaptive mixtures of local experts. Neural Computation, 3(1):79–87. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations. In Proc. of NAACL. Xuanqing Liu, Minhao Cheng, Huan Zhang, and ChoJui Hsieh. 2018. Towards robust neural networks via random self-ensemble. In Proc. of ECCV. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692. Chaitanya Malaviya, Pedro Ferreira, and Andr´e FT Martins. 2018. Sparse and constrained attention for neural machine translation. In Proc. of ACL. 6575 Sameen Maruf, Andr´e FT Martins, and Gholamreza Haffari. 2019. Selective attention for context-aware neural machine translation. In Proc. of NAACL. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv:1609.07843. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Proc. of NeurIPS. Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. 2014. In search of the real inductive bias: On the role of implicit regularization in deep learning. In Proc. of ICLR: Worshop Tack. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proc. of WMT. Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. of ACL. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proc. of NAACL. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv:1701.06538. Tianxiao Shen, Myle Ott, Michael Auli, and Marc’Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. In Proc. of ICML. Daniel Soudry and Yair Carmon. 2016. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv:1605.08361. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. 
Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958. Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proc. of EMNLP. Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. 2019. Adaptive attention span in transformers. In Proc. of ACL. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proc. of ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NeurIPS. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proc. of ACL. Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, and Zhaopeng Tu. 2019a. Convolutional self-attention networks. In Proc. of NAACL. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019b. XLNet: Generalized autoregressive pretraining for language understanding. arXiv:1906.08237. Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2016. Understanding deep learning requires rethinking generalization. In Proc. of ICLR. 6576 Appendices A Architectures and Implementations Our model is implemented using the PyTorch toolkit and the fairseq codebase.19 Machine translation with WMT’14 Our BASE model in this experiment is the transformer-base by Vaswani et al. (2017). Its encoder and decoder are both of 6 transformer layers. Each multi-head attention layer is of hidden size 512, and uses 8 attention heads; the hidden dimensions for the feed forward networks are 2,048. We follow issue #346 of the fairseq’s GitHub repository to replicate the results by Vaswani et al. (2017).20 When training MAE, we mostly use the same hyperparameters, with the only exception being that we warmup the learning rate for 8,000 updates, instead of 4,000.21 At evaluation time, we apply early stopping based on development set loss, and then average the most recent 5 checkpoints of the model, following Vaswani et al. (2017). Machine translation with IWSLT’14. The BASE model in this experiment is due to the fairseq codebase.22 It mostly follows the transformer-base architecture, but uses a larger dropout rate (0.3 vs. 0.1), a smaller feed forward network hidden size (1,024 vs. 2,048), and a larger weight decay (10−4 vs. 0). We use 8,000 warmup updates. Language modeling with WikiText-103. For the BASE model, we follow the model by Baevski and Auli (2019). The learning rate is warmed up for 240, 000 steps. For all three experiments, the gating functions in our MAE model and the NOBCD baseline are implemented as tanh-MLPs. They have 256 hidden dimensions. We apply a batch normalization (Ioffe and Szegedy, 2015) to the input to the MLPs. We can see that the gating functions only have a small amount of parameters, accounting for less than 5% parameters of the full MAE model. A dropout of 0.1 is applied to the output of the first 19https://pytorch.org/; https://github. com/pytorch/fairseq 20https://github.com/pytorch/fairseq/ issues/346 21Due to the randomness in random expert selection, we find that warming up learning rate more slowly helps stabilize early training. 22https://github.com/pytorch/fairseq/ tree/master/examples/translation layer. No weight decay is used. 
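As a concrete but hedged reading of the gating-function description above, the following PyTorch sketch places batch normalization on the input, a tanh-MLP with 256 hidden units, and dropout 0.1 after the first layer, with a softmax over the experts. The model dimension of 512 and the 8 experts are illustrative values, and the authors' actual module may be structured differently.

```python
# Hedged sketch of a gating network matching the appendix description:
# BatchNorm on the input, a tanh-MLP with 256 hidden units, dropout 0.1 after
# the first layer, and a softmax over the h experts.
import torch
import torch.nn as nn

class GatingMLP(nn.Module):
    def __init__(self, d_model: int = 512, n_experts: int = 8, hidden: int = 256):
        super().__init__()
        self.input_norm = nn.BatchNorm1d(d_model)  # batch normalization on the MLP input
        self.fc1 = nn.Linear(d_model, hidden)
        self.dropout = nn.Dropout(p=0.1)           # dropout on the first-layer output
        self.fc2 = nn.Linear(hidden, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model), e.g., an average of the input vectors to the layer.
        h = torch.tanh(self.fc1(self.input_norm(x)))
        h = self.dropout(h)
        return torch.softmax(self.fc2(h), dim=-1)  # gating weights g_1..g_h

gate = GatingMLP()
g = gate(torch.randn(4, 512))
print(g.shape, g.sum(dim=-1))  # (4, 8); rows sum to 1
```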
φ are updated using SGD with a fixed learning rate of 1, using an optimizer separate from the one for the rest of the model. This aims to avoid using momentum-based optimization algorithms (e.g., Adam) for the gating functions, which we empirically find helps alleviate the "rich gets richer" degeneracy.23 In the language modeling experiment, the most recent 100 input vectors are averaged and then fed into the gating functions, while in machine translation we average all the input vectors as the inputs to g(·; φ).

23 It is not entirely clear to us why using momentum-based optimization algorithms to learn the gating functions leads to degenerate solutions more often. One possible reason is that the accumulated momentum steers the gating functions to keep selecting the experts they picked at the early stage of training.

B Learning Curve Comparison for MAE and NOBCD
In §3 (footnote 4) we discuss an overfitting issue caused by jointly updating the experts and the gating function. This section studies it empirically. We compare the learning curves of BASE, NOBCD, and MAE-7 trained on the IWSLT14 dataset, plotted in Figure 3. The models are described in §4.1. We tune dropout and ℓ2 regularization based on development performance. Other hyperparameters are the same for the compared models. The training loss for NOBCD decreases much faster than that of BASE; however, on the development set, it never outperforms BASE, and the development loss starts increasing after epoch 40. MAE-7 finds a nice middle ground in terms of training loss. It outperforms both BASE and NOBCD on the validation set. This provides further evidence for the importance of BCD training.

Figure 3: Learning curves of BASE, NOBCD, and MAE-7 (§B), trained on IWSLT14 EN-DE using the same setup. NOBCD quickly fits the training data, but it does not outperform BASE on the validation set. Trained with BCD, MAE finds a nice middle ground. For better readability, the x-axis starts at epoch 8.

C Additional Results for §5.1
§5.1 describes an experiment with the MAE-7 model where we attribute the development instances of WMT14 to the experts maximizing the gating weights. Table 8 presents more results. The number of instances each expert receives is relatively balanced, and the trend is consistent across different layers.

Layer  E1    E2    E3    E4    E5    E6    E7    E8
1      13.1  13.9  8.9   16.1  10.3  15.3  10.1  11.6
2      13.8  14.5  10.7  10.8  15.4  7.9   16.0  10.9
3      14.0  14.4  12.4  10.6  14.3  9.8   15.4  9.0
4      14.5  13.7  10.4  8.3   15.1  11.8  11.2  15.1
5      11.9  13.8  13.7  15.7  10.1  16.4  6.9   11.5
6      12.9  10.0  12.4  14.6  9.5   15.2  15.7  9.8
Table 8: The percentage of WMT14 development instances attributed to each of the experts in MAE-7's encoder layers (§5.1).
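To make the attribution behind Table 8 concrete, here is a small sketch of how such per-expert percentages can be computed from per-instance gating weights: each instance is assigned to the expert with the largest gating weight and the counts are normalized. The random gating weights below are placeholders standing in for the weights produced by one MAE-7 encoder layer.

```python
# Hedged sketch of the Table 8 computation: argmax attribution of instances to
# experts, reported as percentages for one layer.
import torch

def expert_attribution_percentages(gate_weights: torch.Tensor) -> torch.Tensor:
    """gate_weights: (num_instances, num_experts), rows summing to 1.
    Returns the percentage of instances whose argmax falls on each expert."""
    winners = gate_weights.argmax(dim=-1)                       # (num_instances,)
    counts = torch.bincount(winners, minlength=gate_weights.size(-1)).float()
    return 100.0 * counts / counts.sum()

torch.manual_seed(0)
fake_gates = torch.softmax(torch.randn(3000, 8), dim=-1)        # one encoder layer
print(expert_attribution_percentages(fake_gates))               # roughly balanced percentages
```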
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6578–6588 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6578 Dependency Graph Enhanced Dual-transformer Structure for Aspect-based Sentiment Classification Hao Tang1 , Donghong Ji1∗, Chenliang Li1 , Qiji Zhou1 1Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, China {tanghaopro,dhji,cllee,qiji.zhou}@whu.edu.cn Abstract Aspect-based sentiment classification is a popular task aimed at identifying the corresponding emotion of a specific aspect. One sentence may contain various sentiments for different aspects. Many sophisticated methods such as attention mechanism and Convolutional Neural Networks (CNN) have been widely employed for handling this challenge. Recently, semantic dependency tree implemented by Graph Convolutional Networks (GCN) is introduced to describe the inner connection between aspects and the associated emotion words. But the improvement is limited due to the noise and instability of dependency trees. To this end, we propose a dependency graph enhanced dual-transformer network (named DGEDT) by jointly considering the flat representations learnt from Transformer and graphbased representations learnt from the corresponding dependency graph in an iterative interaction manner. Specifically, a dualtransformer structure is devised in DGEDT to support mutual reinforcement between the flat representation learning and graph-based representation learning. The idea is to allow the dependency graph to guide the representation learning of the transformer encoder and vice versa. The results on five datasets demonstrate that the proposed DGEDT outperforms all state-of-the-art alternatives with a large margin. 1 Introduction Aspect-based or aspect-level sentiment classification is a popular task with the purpose of identifying the sentiment polarity of the given aspect (Yang et al., 2017; Zhang and Liu, 2017; Zeng et al., 2019). The goal is to predict the sentiment polarity of a given pair (sentence, aspect). Aspects in our study are mostly noun phrases appearing in the ∗Corresponding author. input sentence. As shown in Figure 1, where the comment is about the laptop review, the sentiment polarities of two aspects battery life and memory are positive and negative, respectively. Giving a specific aspect is crucial for sentiment classification owing to the situation that one sentence sometimes contains several aspects, and these aspects may have different sentiment polarities. Modern neural methods such as Recurrent Neural Networks (RNN), Convolutional Neural Networks (CNN) (Dong et al., 2014; Vo and Zhang, 2015) have already been widely applied to aspectbased sentiment classification. Inspired by the work (Tang et al., 2016a) which demonstrates the importance of modeling the semantic connection between contextual words and aspects, RNN augmented by attention mechanism (Bahdanau et al., 2015; Luong et al., 2015; Xu et al., 2015) is widely utilized in recent methods for exploring the potentially relevant words with respect to the given aspect (Yang et al., 2017; Zhang and Liu, 2017; Zeng et al., 2019; Wang et al., 2016). CNN based attention methods (Xue and Li, 2018; Li et al., 2018) are also proposed to enhance the phrase-level representation and achieved encouraging results. 
Although attention-based models have achieved promising performance on several tasks, the limitation is still obvious because attention module may highlight the irrelevant words owing to the syntactical absence. For example, given the sentence “it has a bad memory but a great battery life.” and aspect “battery life”, attention module may still assign a large weight to word “bad” rather than “great”, which adversely leads to a wrong sentiment polarity prediction. To take advantages of syntactical information among aspects and contextual words, Zhang et al. (2019) proposed a novel aspect-based GCN method which incorporates dependency tree into the attention models. Actually, using GCN (Kipf and 6579 Aspect: memory Sentiment: Negative Aspect: battery life Sentiment: Positive Figure 1: A typical utterance sample of aspect-based sentiment classification task with a proper dependency tree, notice that different aspects may have different sentiment polarities. Welling, 2017) to encode the information conveyed by a dependency tree has already been investigated in several fields, e.g., modeling document-word relationships (Yao et al., 2019) and tree structures (Marcheggiani and Titov, 2017; Zhang et al., 2018). As shown in Figure 1, an annotated dependency tree of original sentence is provided, and we can observe that word-aspect pairs (bad, memory) and (great, battery life) are well established. Direct application of dependency tree has two obvious shortcomings: (1) the noisy information is inevitably introduced through the dependency tree, due to imperfect parsing performance and the casualness of input sentence; (2) GCN would be inherently inferior in modeling long-distance or disconnected words in the dependency tree. It is reported that lower performance is achieved even with the golden dependency tree, by comparing against using only the flat structure (Zhang et al., 2019). To address these two challenges, we propose a dependency graph enhanced dual-transformer network (named DGEDT) for aspect-based sentiment classification. DGEDT consists of a traditional transformer (Vaswani et al., 2017) and a transformer-like structure implemented via a dependency graph based bidirectional GCN (BiGCN). Specifically, a dual-transformer structure is introduced in DGEDT to fuse the flat representations learnt by the transformer and the graph-based representations learnt based on the dependency graph. These two kinds of representations are jointly refined through a mutual BiAffine transformation process, where the dependency graph can guide and promote the flat representation learning. The final flat representations derived by the transformer is then used with an aspect-based attention for sentiment classification. We have conducted extensive experiments over five benchmark datasets. The experimental results demonstrate that the proposed DGEDT achieves a large performance gain over the existing state-of-the-art alternatives. To the best of our knowledge, the proposed DGEDT is the first work that jointly considers the flat textual knowledge and dependency graph empowered knowledge in a unified framework. Furthermore, unlike other aspect-based GCN models, we aggregate the aspect embeddings from multiple aspect spans which share the same mentioned aspect before feeding these embeddings into submodules. We also introduce an aspect-modified dependency graph in DGEDT. 
2 Related Work Employing modern neural networks for aspectbased sequence-level sentiment classification task, such as CNNs (Kim, 2014; Johnson and Zhang, 2015), RNNs (Castellucci et al., 2014; Tang et al., 2016a), Recurrent Convolutional Neural Networks (RCNNs) (Lai et al., 2015), have already achieved excellent performance in several sentiment analysis tasks. Many attention-based RNN or CNN methods (Yang et al., 2017; Zhang and Liu, 2017; Zeng et al., 2019) are also proposed to handle sequence classification tasks. Tai et al. (2015) proposed a tree-LSTM structure which is enhanced with dependency trees or constituency trees, which outperforms traditional LSTM. Dong et al. (2014) proposed an adaptive recursive neural network using dependency trees. Since being firstly introduced in (Kipf and Welling, 2017), GCN has recently shown a great ability on addressing the graph structure representation in Natural Language Processing (NLP) field. Marcheggiani and Titov (2017) proposed a GCN-based model for semantic role labeling. Vashishth et al. (2018) and Zhang et al. 6580 (2018) used GCN over dependency trees in document dating and relation classification, respectively. Yao et al. (2019) introduced GCN to text classification task with the guidance of document-word and word-word relations. Furthermore, Zhang et al. (2019) introduced aspect-based GCN to cope with aspect-level sentiment classification task using dependency graphs. On the other hand, Chen and Qian (2019) introduced and adapted Capsule Networks along with transfer learning to improve the performance of aspect-level sentiment classification. Gao et al. (2019) introduced BERT into a target-based method, and Sun et al. (2019) constructed BERT-based auxiliary sentences to further improve the performance. 3 Preliminaries Since Transformer (Vaswani et al., 2017) and GCN are two crucial sub-modules in DGEDT, here we briefly introduce these two networks and illustrate the fact that GCN can be considered as a specialized Transformer. Assume that there are three input matrices Q ∈ Rn×dk, K ∈Rm×dk, V ∈Rm×dv, which represent the queries, keys and values respectively. n and m are the length of two inputs. Q′ = Attention(Q, K, V ) = softmax(QKT √dk )V, (1) where Q′ ∈Rn×dv, dk and dv are the dimension size of keys and values, respectively. Actually, Transformer adopts multi-head attention mechanism to further enhance the representative ability as follows: hi = Attention(QW Q i , KW K i , V W V i ), (2) Q′ = Concat([h1, ...])W O, (3) where i ∈[1, H], H is the head size, W Q i ∈ Rdk×dk/H, W K i ∈Rdk×dk/H, W V i ∈Rdv×dv/H and W O ∈Rdv×dv, and hi is the i-th head embedding. Then, two normalization layers are employed to extract higher-level features as follows: Q′ 1 = Norm(Q′ + Q), (4) Q′ 2 = Norm(Q′ 1 + FFN(Q′ 1)), (5) where FFN(x) = Relu(xW1 + b1)W2 + b2 is a two-layer multi-layer perceptron (MLP) with the activation function Relu, Norm is a normalization Dual-transformer Structure Max-Pooling Classify Attention Module Dependency Graph (Aspectmodified) Aspect Representation Aspect Span Aspect Span SUM SUM Aspect Representation BiLSTM/ BERT Input Figure 2: An overall demonstration of our proposed DGEDT. Aspect representation is accumulated from the embeddings in its aspect span, thus the attention module is also aspect-sensitive. layer, Q′ 2 is the output vector of this transformer layer. Equations (1)-(5) can be repeated for T times. Note that if Q = K = V , this operation can be considered as self alignment. 
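To ground Eqs. (1)-(5), here is a minimal self-contained PyTorch sketch of one such transformer layer: per-head scaled dot-product attention, concatenation of the heads, and two residual-plus-normalization steps around a feed-forward network. The dimensions are illustrative, and details of the authors' implementation (masking, dropout, exact normalization) are simplified.

```python
# Hedged sketch of the transformer layer summarized in Eqs. (1)-(5).
import torch
import torch.nn as nn

class TransformerLayer(nn.Module):
    def __init__(self, d_k=64, d_v=64, n_heads=8, d_ff=256):
        super().__init__()
        self.n_heads = n_heads
        self.wq = nn.Linear(d_k, d_k, bias=False)   # W^Q
        self.wk = nn.Linear(d_k, d_k, bias=False)   # W^K
        self.wv = nn.Linear(d_v, d_v, bias=False)   # W^V
        self.wo = nn.Linear(d_v, d_v, bias=False)   # W^O
        self.norm1 = nn.LayerNorm(d_v)
        self.norm2 = nn.LayerNorm(d_v)
        self.ffn = nn.Sequential(nn.Linear(d_v, d_ff), nn.ReLU(), nn.Linear(d_ff, d_v))

    def attention(self, q, k, v):
        # Eqs. (1)-(2): softmax(Q K^T / sqrt(d)) V, computed per head.
        d_head = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d_head ** 0.5
        return torch.softmax(scores, dim=-1) @ v

    def forward(self, Q, K, V):
        n = Q.size(0)
        split = lambda x: x.view(x.size(0), self.n_heads, -1).transpose(0, 1)
        heads = self.attention(split(self.wq(Q)), split(self.wk(K)), split(self.wv(V)))
        concat = heads.transpose(0, 1).reshape(n, -1)   # Eq. (3): concatenate the heads
        q1 = self.norm1(self.wo(concat) + Q)            # Eq. (4): residual + normalization
        return self.norm2(q1 + self.ffn(q1))            # Eq. (5): FFN + residual + normalization

layer = TransformerLayer()
x = torch.randn(10, 64)        # 10 tokens; self alignment when Q = K = V
print(layer(x, x, x).shape)    # torch.Size([10, 64])
```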
As for GCN, the computation can be conducted as follows when the adjacent matrix of each word in the input is explicitly provided. Q′ = Norm(Q + Relu( 1 |Aadj|AadjQW)), (6) where Aadj ∈Rn×n is the adjacent matrix formed from the dependency graph, n is the number of words, Q ∈Rn×dk, W ∈Rdk×dk. 1 |Aadj|Aadj is similar to softmax(QKT √dk ) which is denoted as a generated alignment matrix, except for the main difference that Aadj is fixed and discrete. It is obvious that Equation (6) can be decomposed into Equations (1)-(4), and it can be also repeated for T times. In our perspective, GCN is a specialized Transformer with the head size set to one and the generated alignment matrix replaced by a fixed adjacent matrix. 4 DGEDT The network architecture of our proposed DGEDT is shown in Figure 2. For a given input text, we 6581 Input Embedding Self Attention Feed Forward Add&Norm Add&Norm BiGCN Add&Norm Flat (with graph) Graph (with flat) Mutual Biaffine Add&Norm Add&Norm T  Figure 3: A simplified demonstration of dualtransformer structure, which consists of two submodules, one is a standard transformer, another is a transformer-like structure implemented by BiGCN with the supervision of dependency graph. first utilize BiLSTM or Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) as the aspect-based encoder to extract hidden contextual representations. Then these hidden representations are fed into our proposed dual-transformer structure, with the guidance of aspect-modified dependency graph. At last, we aggregate all the aspect representations via maxpooling and apply an attention module to align contextual words and the target aspect. In this way, the model can automatically select relevant aspectsensitive contextual words with the dependency information for sentiment classification. 4.1 Aspect-based Encoder We use wk to represent the k-th word embedding. Bidirectional LSTMs (Schuster and Paliwal, 1997; Hochreiter and Schmidhuber, 1997) (BiLSTM) are applied for the encoder if we do not use BERT. h1, ... = Encoder([w1, ...]), (7) where hk ∈Rh is the k-th output of Encoder (BERT or BiLSTM), k ∈[1, N] and h is the hidden size, and N is the text length. Note that for a given aspect, there may exist M aspect mentions referring to the same aspect in the text. Also, each aspect mention could contain more than one word. To ease aspect-level representation in the later stage, we choose to collapse each aspect mention as a single word. The summation of the representations of each constituent word within the mention works as its hidden representation. We also develop a span set span with the size Ns. Each span records the start and end position of the given aspect. spanj denotes the j-th aspect span in original text. Note that for non-aspect words, spans involved in the computation are their original positions with the length as one. sj = SUM([hspanj]), (8) where j ∈[1, Ns], Ns <= N denotes the number of words after aspect-based sum operation. sj is the j-th output of the aspect-based encoder layer. This process can be illuminated by an example transforming ‘It has a bad memory but a great battery life’ to ‘It has a bad memory but a great [battery life]’. N is ten and Ns is nine in this case. 4.2 Dual-transformer Structure After obtaining the contextual hidden representations from the aspect-based encoder, we develop a dual-transformer structure to fuse the flat textual knowledge and dependency knowledge in a mutual reinforcement manner. 
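Before detailing that structure, the following is a small hedged sketch of the span-sum operation in Eq. (8) described in §4.1 above: the hidden states of a multi-word aspect mention are summed into a single vector, so that, e.g., "battery life" becomes one position. The span indices and hidden size are illustrative; a real implementation would derive the spans from the annotated aspect positions.

```python
# Hedged sketch of Eq. (8): collapse each aspect span by summing its hidden states.
import torch

def collapse_aspect_spans(hidden: torch.Tensor, spans):
    """hidden: (N, h) encoder outputs; spans: list of (start, end) pairs, end exclusive,
    covering the sentence. Non-aspect words are length-1 spans."""
    return torch.stack([hidden[s:e].sum(dim=0) for s, e in spans])

# "It has a bad memory but a great battery life" -> 10 tokens; aspect span = (8, 10).
hidden = torch.randn(10, 4)
spans = [(i, i + 1) for i in range(8)] + [(8, 10)]   # collapse the last two tokens
collapsed = collapse_aspect_spans(hidden, spans)
print(hidden.shape, "->", collapsed.shape)           # (10, 4) -> (9, 4), i.e., N = 10, Ns = 9
```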
Specifically, as demonstrated in Figure 3, the dual-transformer structure consists of a multi-layer Transformer and a multi-layer BiGCN.

Bidirectional GCN: We design a BiGCN by considering the direction of each edge in the dependency graph. Note that the dependency graph is constructed at the word level. Hence, similar to the aspect-level representation in Section 4.1, we merge the edges corresponding to the constituent words of the given aspect in the adjacent matrix, resulting in an aspect-level adjacent matrix. Then, we derive the graph-based representations for the input text as follows:

Q^t_{out} = Relu\Big(\frac{1}{|A^{out}_{adj}|} A^{out}_{adj} Q^t W_{out}\Big),   (9)
Q^t_{in} = Relu\Big(\frac{1}{|A^{in}_{adj}|} A^{in}_{adj} Q^t W_{in}\Big),   (10)
Q^{t+1} = Norm\big(Q^t + Relu([Q^t_{out}, Q^t_{in}] W_O + b_O)\big),   (11)
Q^{t+1} = BiGCN(Q^t, A^{out}_{adj}, A^{in}_{adj}),   (12)

where A^{out}_{adj} and A^{in}_{adj} are the outgoing and incoming aspect-level adjacent matrices gathered from the dependency graph, respectively. Here, we concatenate the representations of the two directions to produce the final output in each iteration, while other similar methods conduct the merging only in the last iteration. BiGCN represents Equations (9)-(11). We use a simple method to merge the adjacent matrix of the words in the same aspect span as follows:

A'_{adj_i} = MIN\big(\vec{1}, SUM([A_{adj_{span_i}}])\big),   (13)

where A_{adj} can be either A^{out}_{adj} or A^{in}_{adj}, yielding A^{out}_{adj}{}' and A^{in}_{adj}{}'. Each span records the start and end position of the given aspect, and span_i denotes the i-th span in the original text.

BiAffine Module: Assume that there are two inputs S_1 ∈ R^{n×h} and S_2 ∈ R^{n'×h}. We introduce a mutual BiAffine transformation process to interchange their relevant features as follows:

A_1 = softmax(S_1 W_1 S_2^T),   (14)
A_2 = softmax(S_2 W_2 S_1^T),   (15)
S'_1 = A_1 S_2,   (16)
S'_2 = A_2 S_1,   (17)
S'_1, S'_2 = Biaffine(S_1, S_2),   (18)

where W_1, W_2 ∈ R^{h×h}. Here, S'_1 can be considered a projection from S_2 onto S_1, and S'_2 follows the same principle. Biaffine represents Equations (14)-(17). A_1 and A_2 are temporary alignment matrices projecting from S_2 to S_1 and from S_1 to S_2, respectively.

The Whole Procedure: We can then assemble all the sub-modules mentioned above to construct our proposed dual-transformer structure; the detailed procedure is listed below:

S^{Tr'}_t = Transformer(S^{Tr}_t),   (19)
S^{G'}_t = BiGCN(S^{G}_t, A^{out}_{adj}{}', A^{in}_{adj}{}'),   (20)
S^{Tr''}_t, S^{G''}_t = Biaffine(S^{Tr'}_t, S^{G'}_t),   (21)
S^{Tr}_{t+1} = Norm(S^{Tr'}_t + S^{Tr''}_t),   (22)
S^{G}_{t+1} = Norm(S^{G'}_t + S^{G''}_t),   (23)

where S^{Tr}_0 = S^{G}_0 = H, and H ∈ R^{N_s×h} denotes the contextual hidden representations {s_1, ...} from the aspect-based encoder. Transformer represents the process denoted by Equations (1)-(5). Equations (19)-(23) are calculated repeatedly for T times, with t ∈ [0, T]. We choose S^{Tr}_T ("flat (with graph)" in Figure 3) as the final representation, because S^{G}_T ("graph (with flat)" in Figure 3) depends heavily on the dependency graph.

4.3 Aspect-based Attention Module
Given the M aspect representations obtained through the above-mentioned procedure, we can derive the final aspect representation by a max-pooling operation. Here, we utilize an attention mechanism to identify relevant words with respect to the aspect. However, the M aspect representations are all highly relevant to the aggregated aspect representation. To prevent these aspect mentions from being assigned overly high weights, we utilize a mask mechanism that explicitly sets the attention values of aspect mentions to zeros.
Let I be the index set of these M aspect mentions, we form Mask vector as follows: Maski = ( −inf, if i ∈I; 0, if other. (24) We then calculate the probability distribution p of the sentiment polarity as follows: hf = MaxPooling([STr T i|i ∈I]), (25) af = softmax(hfW3STr T T + Mask), (26) h′f = Relu([hf, afSTr T ]W ′ + b′), (27) p = softmax(h′fWp + bp), (28) where W3, W ′, Wp and b′, bp are learnable weights and biases, respectively. 4.4 Loss Function The proposed DGEDT is optimized by the standard gradient descent algorithm with the crossentropy loss and L2-regularization: Loss = − X (d,yp)∈D log(pyp) + λ||θ||2, (29) where D denotes the training dataset, yp is the ground-truth label and pyp means the yp-th element of p. θ represents all trainable parameters, and λ is the coefficient of the regularization term. 5 Experiments 5.1 Datasets Our experiments are conducted on five datasets, including one (Twitter) which is originally built by Dong et al. (2014), and the other four datasets (Lap14, Rest 14, Rest 15, Rest16) are respectively from SemEval 2014 task 4 (Pontiki et al., 2014), SemEval 2015 task 12 (Pontiki et al., 2015) and SemEval 2016 task 5 (Hercig et al., 2016), consisting 6583 Dataset Category Pos Neu Neg Twitter Train 1561 3127 1560 Test 173 346 173 Lap14 Train 994 464 870 Test 341 169 128 Rest14 Train 2164 637 807 Test 728 196 196 Rest15 Train 912 36 256 Test 326 34 182 Rest16 Train 1240 69 439 Test 469 30 117 Table 1: Detailed statistics of five datasets in our experiments. of data from two categories: laptop and restaurant. The statistics of datasets are demonstrated in Table 1. 5.2 Experiment Setup We compare the proposed DGEDT∗with a line of baselines and state-of-the-art alternatives, including LSTM, MemNet (Tang et al., 2016b), AOA (Huang et al., 2018), IAN (Ma et al., 2017), TNetLF (Li et al., 2018), CAPSNet (Chen and Qian, 2019), Transfer-CAPS (Chen and Qian, 2019), TGBERT (Gao et al., 2019), AS-CNN (Zhang et al., 2019) and AS-GCN (Zhang et al., 2019). We conduct the experiments with our proposed DGEDT with BiLSTM as the aspect-based encoder, and DGEDT +BERT with BERT as the aspect-based encoder. Several simplified variants of DGEDT are also investigated: DGEDT(Transformer) denotes that we keep standard Transformer and remove the BiGCN part, DGEDT(BiGCN) denotes that we keep BiGCN and remove the Transformer part. The layer number or iteration number (i.e., T) of all available models is set to three for both Transformer and GCN. We use Spacy toolkit† to generate dependency trees. 5.3 Parameter Settings We use BERT-base English version (Devlin et al., 2019), which contains 12 hidden layers and 768 hidden units for each layer. We use Adam (Kingma and Ba, 2014) as the optimizer for BERT and our model with the learning rate initialized by 0.00001 and 0.001 respectively, and decay rate of learning is set as 0.98. Except for the influence of decay rate, the learning rate decreases dynamically according to the current step number. Batch shuffling ∗available at https://github.com/tomsonsgs/DGEDT-sentimaster. † available at https://spacy.io/ is applied to the training set. The hidden size of our basic BiLSTM is 256 and the size of all embeddings is set as 100. The vocab size of BERT is 30,522. The batch size of all model is set as 32. As for regularization, dropout function is applied to word embeddings and the dropout rate is set as 0.3. Besides, the coefficient λ for the L2norm regularization is set as 0.0001. 
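As a hedged illustration of the objective in Eq. (29) with the regularization coefficient just mentioned, the sketch below combines a cross-entropy term with an L2 penalty weighted by λ = 1e-4. The tiny linear classifier is only a placeholder for DGEDT, and the full training loop (batching, learning-rate decay, BERT vs. BiLSTM learning rates) is omitted.

```python
# Hedged sketch of the training objective in Eq. (29): cross-entropy plus L2 regularization.
import torch
import torch.nn as nn

model = nn.Linear(16, 3)                          # placeholder for the DGEDT classifier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-4                                        # coefficient of the L2 regularizer

features = torch.randn(32, 16)                    # a fake batch of sentence representations
labels = torch.randint(0, 3, (32,))               # Pos / Neu / Neg

logits = model(features)
ce = nn.functional.cross_entropy(logits, labels)  # cross-entropy term (batch-averaged)
l2 = sum((p ** 2).sum() for p in model.parameters())  # ||theta||^2
loss = ce + lam * l2                              # Eq. (29)

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```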
We train our model up to 50 epochs and conduct the same experiment for 10 times with random initialization. Accuracy and Macro-Averaged F1 are adopted as the evaluation metrics. We follow the experimental setup in (Zhang et al., 2019; Chen and Qian, 2019) and report the average maximum value for all metrics on testing set. If the model is not equipped with BERT, then we use word vectors that were pre-trained from Glove (Pennington et al., 2014). 5.4 Overall Results As shown in Table 2, our model DGEDT outperforms all other alternatives on all five dataset. BERT makes further improvement on the performance especially in Twitter, Rest14 and Rest 15. We can conclude that traditional Transformer DGEDT(Transformer) obtains better performance than DGEDT(BiGCN) in the most datasets. DGEDT employs and combines two sub-modules (traditional Transformer and dependency graph enhanced GCN) and outperforms any single submodule. Using dependency tree indeed contributes to the performance when acting as a supplement rather than a single decisive module. 5.5 Ablation Study Note that the performance of individual modules is already reported in Table 2. As shown in Table 3, we investigate and report four typical ablation conditions. ‘–Mask’ denotes that we remove the aspect-based attention mask mechanism, and ‘–MultiAspect’ denotes that we only use the aspect representation of the first aspect mention instead of MaxPooling them. We can see that these two procedures provide slight improvement. ‘– BiGCN(+GCN)’ means that we remove the bidirectional connection and only use original GCN, the results show that bidirectional GCN outperforms original GCN owing to the adequate connection information. ‘–BiAffine’ indicates that we remove the BiAffine process and use all the outputs of dual-transformer structure, we can thus conclude that BiAffine process is critical for our model, and utilizing simple concatenation of the 6584 Model Twitter Lap14 Rest14 Rest15 Rest16 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 LSTM 69.6 67.7 69.3 63.1 78.1 67.5 77.4 55.2 86.8 63.9 MemNet 71.5 69.9 70.6 65.2 79.6 69.6 77.3 58.3 85.4 66.0 AOA 72.3 70.2 72.6 67.5 80.0 70.4 78.2 57.0 87.5 66.2 IAN 72.5 70.8 72.1 67.4 79.3 70.1 78.6 52.7 84.7 55.2 TNet 73.0 71.4 74.6 70.1 80.4 71.0 78.5 59.5 89.1 70.4 AS-CNN 71.1 69.5 72.6 66.7 81.7 73.1 78.5 58.9 87.4 64.6 CAPSNet – – 72.7 68.8 78.8 69.7 – – – – Transfer-CAPS – – 73.9 70.2 79.3 70.9 – – – – AS-GCN 72.2 70.4 75.6 71.1 80.8 72.0 79.9 61.9 89.0 67.5 DGEDT(Transformer) 74.1 72.7 76.0 71.4 82.8 73.9 81.0 64.9 90.0 72.6 DGEDT(BiGCN) 72.8 71.0 76.2 71.8 81.8 72.5 80.4 62.9 89.4 70.4 DGEDT 74.8 73.4 76.8 72.3 83.9 75.1 82.1 65.9 90.8 73.8 TG-BERT 76.7 74.3 78.9 74.4 85.1 78.4 – – – – DGEDT-BERT 77.9 75.4 79.8 75.6 86.3 80.0 84.0 71.0 91.9 79.0 Table 2: Overall performance of accuracy and F1 on five datasets, AS means aspect-based. Ablation Twitter Lap14 Rest14 Rest15 Rest16 Acc Acc Acc Acc Acc DGEDT 74.8 76.8 83.9 82.1 90.8 –Mask 74.5 76.7 83.5 82.0 90.5 –MultiAspect 74.5 76.4 83.4 81.8 90.4 –BiGCN (+GCN) 74.3 76.2 83.2 81.4 90.2 –BiAffine 73.0 75.4 82.4 81.0 89.6 Table 3: Overall ablation results of accuracy on five datasets. (a) Lap14 Dataset. (b) Rest14 Dataset. Figure 4: A demonstration of accuracy-T curves on Lap14 and Rest 14 datasets respectively: T is the iteration number. outputs of Transformer and BiGCN is worse than DGEDT(Transformer). 5.6 Impact of Iteration Number As shown in Figure 4, we find that three is the best iteration number for Lap14 and Rest14. 
Dependency information will not be fully broadcasted when the iteration number is too small. The model will suffer from over-fitting and redundant information passing, which results in the performance drop when iteration number is too large. So, numerous experiments need to be conducted to figure out a proper iteration number. 5.7 Case Study and Attention Distribution Exploration As shown in Figure 5, DGEDT and DGEDT(BiGCN) output correct prediction Negative while DGEDT(Transformer) fails for the sentence The management was less than accommodating. To figure out the essential cause, we demonstrate the attention of self alignment in Figure 5. We can see that for the aspect management, DGEDT(Transformer) mainly focuses on accommodating, which is a positive word at document level. Thus, DGEDT(Transformer) obtains an incorrect prediction Positive. In the dependency tree, less which is often regarded as a negative word has a more related connection with aspect management, so DGEDT(BiGCN) outputs right sentiment Negative. With the assistance of supplementary dependency graph, DGEDT also obtains right prediction Negative owing to the high attention value between management and less. As shown in Figure 6, DGEDT and DGEDT(Transformer) output correct prediction Positive while DGEDT(BiGCN) fails for the sentence This little place is wonderfully warm welcoming. To figure out the essential cause, we demonstrate the attention of self alignment and dependency tree in Figure 6. We can see that for the aspect place, DGEDT(Transformer) mainly focuses on wonderfully, which is a positive word at document level. Thus, DGEDT(Transformer) obtains a correct prediction Positive. In the dependency tree, little which is often regarded as a negative word has a more related connection with aspect place, so DGEDT(BiGCN) outputs incorrect sentiment Negative. With the disturbance of inappropriate dependency tree, DGEDT still 6585 Aspect: management Golden: Negative DGEDT(Transformer): Positive DGEDT(BiGCN): Negative DGEDT: Negative (a) The attention matrix of self alignment by DGEDT(Transformer). (b) The attention matrix of self alignment by DGEDT. Figure 5: Case Study 1: A testing example demonstrates that the information of dependency tree contributes to the classification performance, our dual-transformer model generates a proper attention distribution with the assistance of dependency tree. Darker cell color indicates higher attention value, the aspect is management and golden sentiment is Negative. Aspect: place Golden: Positive DGEDT(Transformer): Positive DGEDT(BiGCN): Negative DGEDT: Positive (a) The attention matrix of self alignment by DGEDT(Transformer). (b) The attention matrix of self alignment by DGEDT. Figure 6: Case Study 2: A testing example demonstrates that the information of dependency tree may be harmful for the classification performance, and our dual-transformer model still obtains a proper attention distribution. Darker cell color indicates higher attention value, the aspect is place and golden sentiment is Positive. 6586 obtains right prediction Positive owing to the high attention value between place and wonderfully. We can see from two examples above that DGEDT is capable of achieving the proper balance between dependency graph enhanced BiGCN and traditional Transformer according to different situations. 
6 Conclusion Recently neural structures with syntactical information such as semantic dependency tree and constituent tree are widely employed to enhance the word-level representation of traditional neural networks. These structures are often modeled and described by TreeLSTMs or GCNs. To introduce Transformer into our task and diminish the error induced by incorrect dependency trees, we propose a dual-transformer structure which considers the connections in dependency tree as a supplementary GCN module and a Transformer-like structure for self alignment in traditional Transformer. The results on five datasets demonstrate that dependency tree indeed promotes the final performance when utilized as a sub-module for dual-transformer structure. In future work, we can further improve our method in the following aspects. First, the edge information of the dependency trees needs to be exploited in later work. We plan to employ an edgeaware graph neural network considering the edge labels. Second and last, domain-specific knowledge can be incorporated into our method as an external learning source. Acknowledgments We thank the reviewers for their valuable comments. This work is supported through the grants from National Natural Science Foundation of China (NSFC-61772378), the National Key research and Development Program of China (No.2017YFC1200500) and the Major Projects of the National Social Science Foundation of China (No.11&ZD189). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Giuseppe Castellucci, Simone Filice, Danilo Croce, and Roberto Basili. 2014. UNITOR: aspect based sentiment analysis with structured learning. In Proceedings of the 8th International Workshop on Semantic Evaluation, SemEval@COLING 2014, Dublin, Ireland, August 23-24, 2014, pages 761–767. The Association for Computer Linguistics. Zhuang Chen and Tieyun Qian. 2019. Transfer capsule network for aspect level sentiment classification. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 547–556. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics. Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 2: Short Papers, pages 49–54. The Association for Computer Linguistics. Zhengjie Gao, Ao Feng, Xinyu Song, and Xi Wu. 2019. Target-dependent sentiment classification with BERT. IEEE Access, 7:154290–154299. Tom’avs Hercig, Tom´as Brychc´ın, Luk´as Svoboda, and Michal Konkol. 2016. UWB at semeval-2016 task 5: Aspect based sentiment analysis. 
In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016, pages 342–349. The Association for Computer Linguistics. S Hochreiter and J Schmidhuber. 1997. Long shortterm memory. Neural Computation, 9(8):1735– 1780. Binxuan Huang, Yanglan Ou, and Kathleen M. Carley. 2018. Aspect level sentiment classification with attention-over-attention neural networks. In Social, Cultural, and Behavioral Modeling - 11th International Conference, SBP-BRiMS 2018, Washington, DC, USA, July 10-13, 2018, Proceedings, volume 10899 of Lecture Notes in Computer Science, pages 197–206. Springer. Rie Johnson and Tong Zhang. 2015. Semi-supervised convolutional neural networks for text categorization via region embedding. In Advances in Neural Information Processing Systems 28: Annual 6587 Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 919–927. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1746–1751. ACL. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Thomas N. Kipf and Max Welling. 2017. Semisupervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30, 2015, Austin, Texas, USA, pages 2267–2273. AAAI Press. Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018. Transformation networks for target-oriented sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 946– 956. Association for Computational Linguistics. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1412–1421. The Association for Computational Linguistics. Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages 4068– 4074. ijcai.org. Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1506–1515. Association for Computational Linguistics. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1532–1543. ACL. 
Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th International Workshop on Semantic Evaluation, SemEval@NAACLHLT 2015, Denver, Colorado, USA, June 4-5, 2015, pages 486–495. The Association for Computer Linguistics. Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. Semeval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation, SemEval@COLING 2014, Dublin, Ireland, August 23-24, 2014, pages 27–35. The Association for Computer Linguistics. Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Trans. Signal Processing, 45(11):2673–2681. Chi Sun, Luyao Huang, and Xipeng Qiu. 2019. Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 380–385. Association for Computational Linguistics. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 1556–1566. The Association for Computer Linguistics. Duyu Tang, Bing Qin, Xiaocheng Feng, and Ting Liu. 2016a. Effective lstms for target-dependent sentiment classification. In COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 3298– 3307. ACL. Duyu Tang, Bing Qin, and Ting Liu. 2016b. Aspect level sentiment classification with deep memory network. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 214–224. The Association for Computational Linguistics. Shikhar Vashishth, Shib Sankar Dasgupta, Swayambhu Nath Ray, and Partha P. Talukdar. 6588 2018. Dating documents using graph convolution networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1605– 1615. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998–6008. Duy-Tin Vo and Yue Zhang. 2015. Target-dependent twitter sentiment classification with rich automatic features. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 1347–1353. AAAI Press. Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspectlevel sentiment classification. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 606–615. The Association for Computational Linguistics. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 2048–2057. JMLR.org. Wei Xue and Tao Li. 2018. Aspect based sentiment analysis with gated convolutional networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 2514–2523. Association for Computational Linguistics. Min Yang, Wenting Tu, Jingxuan Wang, Fei Xu, and Xiaojun Chen. 2017. Attention based LSTM for target dependent sentiment classification. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, February 4-9, 2017, San Francisco, California, USA, pages 5013–5014. AAAI Press. Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 7370–7377. AAAI Press. Jiangfeng Zeng, Xiao Ma, and Ke Zhou. 2019. Enhancing attention-based LSTM with position context for aspect-level sentiment classification. IEEE Access, 7:20462–20471. Chen Zhang, Qiuchi Li, and Dawei Song. 2019. Aspect-based sentiment classification with aspectspecific graph convolutional networks. CoRR, abs/1909.03477. Yue Zhang and Jiangming Liu. 2017. Attention modeling for targeted sentiment. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017, Valencia, Spain, April 3-7, 2017, Volume 2: Short Papers, pages 572–577. Association for Computational Linguistics. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2205–2215. Association for Computational Linguistics.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6589–6599 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6589 Differentiable Window for Dynamic Local Attention Thanh-Tung Nguyen∗†¶, Xuan-Phi Nguyen∗†¶, Shafiq Joty¶§, Xiaoli Li† ¶Nanyang Technological University §Salesforce Research Asia †Institute for Infocomm Research, A-STAR Singapore {ng0155ng@e.;nguyenxu002@e.;srjoty@}ntu.edu.sg [email protected] Abstract We propose Differentiable Window, a new neural module and general purpose component for dynamic window selection. While universally applicable, we demonstrate a compelling use case of utilizing Differentiable Window to improve standard attention modules by enabling more focused attentions over the input regions. We propose two variants of Differentiable Window, and integrate them within the Transformer architecture in two novel ways. We evaluate our proposed approach on a myriad of NLP tasks, including machine translation, sentiment analysis, subject-verb agreement and language modeling. Our experimental results demonstrate consistent and sizable improvements across all tasks. 1 Introduction Computing relative importance across a series of inputs can be regarded as one of the important advances in modern deep learning research. This paradigm, commonly known as attention (Bahdanau et al., 2015), has demonstrated immense success across a wide spectrum of applications. To this end, learning to compute contextual representations (Vaswani et al., 2017), to point to the relevant part in the input (Vinyals et al., 2015), or to select windows or spans (Wang and Jiang, 2017) from sequences forms the crux of many modern deep neural architectures. Despite aggressive advances in developing neural modules for computing relative relevance (Luong et al., 2015; Chiu and Raffel, 2018), there has been no general purpose solution for learning differentiable attention windows. While span selectionbased pointer network models typically predict a start boundary and an end boundary (Wang and Jiang, 2017; Seo et al., 2017), these soft predictions generally reside at the last layer of the net∗*Equal contributions work and are softly optimized. To the best of our knowledge, there exists no general purpose component for learning differentiable windows within networks. Although the practical advantages of learning differentiable windows are plenty, this paper focuses on improving attentions with differentiable windows. The key idea is to enable more focused attention, leveraging dynamic window selection for limiting (and guiding) the search space for the standard attention modules to work within. This can also be interpreted as performing a form of dynamic local attention. We make several key technical contributions. First, we formulate the dynamic window selection problem as a problem of learning a discrete mask (i.e., binary values representing the window). By learning and composing left and right boundaries, we show that we are able to parameterize the (discrete) masking method. We then propose soft adaptations of the above mentioned, namely trainable soft masking and segment-based soft masking, which are differentiable approximations that can not only be easily optimized in an end-toend fashion, but also inherit the desirable properties of discrete masking. While these modules are task and model agnostic, we imbue the state-of-the-art Transformer (Vaswani et al., 2017) model with our differentiable window-based attention. 
To this end, we propose two further variants, i.e., multiplicative window attention and additive window attention for improving the Transformer model. Within the context of sequence transduction and self-attention based encoding, learning dynamic attention windows are beneficial because they can potentially eliminate noisy aggregation and alignment from large input sequences. On the other hand, it is good to note that hard attention (Xu et al., 2015b), which replaces the weight average of soft attention with a stochas6590 tic sampling model, tries to achieve similar ends, albeit restricted to token-level selection. Hence, our proposed differentiable windows are more flexible and expressive compared to hard attentions. We evaluate our Transformer model with differentiable window-based attention on a potpourri of NLP tasks, namely machine translation, sentiment analysis, language modeling, and subjectverb agreement. Extensive experimental results on these tasks demonstrate the effectiveness of our proposed method. Notably, on the EnglishGerman and English-French WMT’14 translation tasks, our method accomplishes improvements of 0.63 and 0.85 BLEU, respectively. On the Stanford Sentiment Treebank and IMDB sentiment analysis tasks, our approach achieves 2.4% and 3.37% improvements in accuracy, respectively. We further report improvements of 0.92% in accuracy and 2.13 points in perplexity on the subjectverb agreement and language modeling tasks, respectively. We make our code publicly available at https://ntunlpsg.github.io/project/ dynamic-attention/. 2 Background The attention mechanism enables dynamic selection of relevant contextual representations with respect to a query representation. It has become a key module in most deep learning models for language and image processing tasks, especially in encoder-decoder models (Bahdanau et al., 2015; Luong et al., 2015; Xu et al., 2015a). 2.1 Transformer and Global Attention The Transformer network (Vaswani et al., 2017) models the encoding and decoding processes using stacked self-attentions and cross-attention (encoderdecoder attentions). Each attention layer uses a scaled multiplicative formulation defined as: score(Q, K) = (QW Q)(KW K)T √ d (1) att(Q, K, V ) = S(score(Q, K))(V W V ) (2) where S(A) denotes the softmax operation over each row of matrix A, Q ∈IRnq×d is the matrix containing the nq query vectors, and K, V ∈ IRn×d are the matrices containing the n key and value vectors respectively, with d being the number of vector dimensions; W Q, W K, W V ∈IRd×d are the associated weights to perform linear transformations. To encode a source sequence, the encoder applies self-attention, where Q, K and V contain the same vectors coming from the output of the previous layer.1 In the decoder, each layer first applies masked self-attention over previous-layer states. The resulting vectors are then used as queries to compute cross-attentions over the encoder states. For cross-attention, Q comprises the decoder selfattention states while K and V contain the encoder states. The attention mechanism adopted in the Transformer is considered global since the attention context spans the entire sequence. 2.2 Windows in Attentions In theory, given enough training data, global attention should be able to model dependencies between the query and the key vectors well. However, in practice we have access to only a limited amount of training data. 
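Before turning to local windows, the following is a minimal PyTorch sketch of the scaled multiplicative (global) attention of Eq. 1–2 (single head, no batching); the function and variable names are ours, and the projection weights would in practice be learned parameters of the attention layer rather than random tensors.

```python
import torch
import torch.nn.functional as F

def global_attention(Q, K, V, W_Q, W_K, W_V):
    """Scaled multiplicative attention of Eq. 1-2 (illustrative, single head)."""
    d = Q.size(-1)
    # Eq. 1: score(Q, K) = (Q W_Q)(K W_K)^T / sqrt(d)
    scores = (Q @ W_Q) @ (K @ W_K).transpose(-2, -1) / d ** 0.5
    # Eq. 2: att(Q, K, V) = S(score)(V W_V), with S the row-wise softmax
    return F.softmax(scores, dim=-1) @ (V @ W_V)

# Example shapes: n_q query vectors, n key/value vectors, dimension d
n_q, n, d = 4, 10, 64
Q, K, V = torch.randn(n_q, d), torch.randn(n, d), torch.randn(n, d)
W_Q, W_K, W_V = (torch.randn(d, d) for _ in range(3))
out = global_attention(Q, K, V, W_Q, W_K, W_V)   # shape (n_q, d)
```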
Several recent studies suggest that incorporating more focused attention over important local regions in the input sequence as an explicit inductive bias could be more beneficial. In particular, Shaw et al. (2018) show that adding relative positional biases to the attention scores (Eq. 1) increases BLEU scores in machine translation. Specifically, for each query qi ∈Q at position i and key kj ∈K at position j, a trainable vector ai,j = wmax(−τ,min(j−i,τ)) is added to the key vector before the query-key dot product is performed. The window size τ is chosen via tuning. Sperber et al. (2018) also consider local information by restricting self-attention to neighboring representations to improve long-sequence acoustic modeling. Although shown to be effective, their methods only apply to self-attention and not to cross-attention where the query vectors come from a different sequence. That said, Luong et al. (2015) are the first to propose a Gaussian-based local attention for crossattention. At each decoding step t, their model approximates the source-side pivot position pt as a function of the decoding state and the source sequence length. Then, local attention is achieved by multiplying the attention score with a confidence term derived from a N(pt, σ2) distribution. The aligned pivot pt and the variance σ2 (a hyperparameter) respectively represent the center and the size of the local window. 1Initially, Q, K, and V contain the token embeddings. 6591 Meanwhile, Yang et al. (2018) improve the method of Luong et al. (2015) by assigning a soft window weight (a Gaussian bias) to obtain a flexible window span. Despite effective, the aligned pivot position in the source is determined only by the decoder state, while the encoder states are disregarded - these should arguably give more relevant information regarding the attention spans over the source sequence. Besides, the confidence for local attention span may not strictly follow a normal distribution, but rather vary dynamically depending on the relationship between the query and the key. Furthermore, the approach of Luong et al. (2015) is only applicable to cross-attention while the one of Yang et al. (2018) works better only for encoder self-attention as shown in their experiments. Our proposed differentiable window approach to local attention addresses the above limitations of previous methods. Specifically, our methods are dynamic and applicable to encoder and decoder self-attentions as well as cross-attention, without any functional constraints. They incorporate encoder states into the local window derivation. They are also invariant to sequence length, which removes the dependence on global features from the local context extraction process. 3 Dynamic Differentiable Window Our proposed attention method works in two steps: (i) derive the attention span for each query vector to attend over, and (ii) compute the respective attention vector using the span. In this section, we present our approaches to step (i) by proposing trainable soft masking and segment-based soft masking. In the next section, we present our methods to compute the attention vectors. To give the necessary background to understand what can be expected from our method, we first present the discrete masking case. 3.1 Discrete Window Masking In this context, we seek to dynamically derive a boolean mask vector for each query that will indicate the window in the key-sequence over which the query should attend. 
In other words, attentions are only activated on the consecutive positions where the mask vector element is 1, and the positions with 0 are canceled out. Let the query vector and the keysequence be q ∈IRd and K = (k1, k2, . . . , kn), respectively. Formally, we define the local attention mask vector mq ∈{0, 1}n for the query q as φT lq φT rq flq = φT lqLn grq = φT rqLT n mq = flq ⊙grq Figure 1: Example of φ, f, and g vectors and how the mask vector mq can be derived for lq = 3 and rq = 8. follows. mi q = ( 1, if lq ≤i ≤rq 0, otherwise (3) where lq and rq denote the left and right positional indices that form a discrete window [lq, rq] over which the query attends. As such, in the standard global attention, lq = 1 and rq = n for all the query vectors, and in decoder self-attention, lq = 1 and rq = t for the query vector at decoding step t. To facilitate the construction of mq, we first define vectors φk, fk, gk and matrix Ln with entries as: φi k = ( 1, if i = k 0, otherwise ; f i k = ( 1, if i ≥k 0, otherwise gi k = ( 1, if i ≤k 0, otherwise ; Li,j n = ( 1, if i ≤j 0, otherwise (4) where φk ∈{0, 1}n denotes the one-hot representation for a boundary position k (from the left or right of a sequence), and fk, gk ∈{0, 1}n are the ‘rightward’ mask vector and ‘leftward’ mask vector, respectively; Ln ∈{0, 1}n×n denotes a unit-value (1) upper-triangular matrix with i and j being the row and column indices respectively. Figure 1 visualizes how these entities appear. Specifically, fk has entry values of 1’s for position k and its right positions, while gk has entry values of 1’s for position k and its left positions. As such, fk and gk can be derived from φk and Ln as follows. fk = φT k Ln; gk = φT k LT n (5) Note that fk can be interpreted as the cumulative sum across φk, while gk as the inverse cumulative sum across φk. Given the above definitions, the mask vector mq for a query q to attend over the window [lq, rq] in 6592 the key sequence such that 1 ≤lq ≤rq ≤n can be achieved by: mq = flq ⊙grq = (φT lqLn) ⊙(φT rqLT n) (6) where ⊙denotes element-wise multiplication. As shown in Figure 1, mq represents the intersection between flq and grq, and forms a masking span for the attention. 3.2 Trainable Soft Masking The above masking method is non-differentiable as φ is discrete, which makes it unsuitable in an end-to-end neural architecture. In our trainable soft masking method, we approximate the discrete onehot vector φ with a pointing mechanism (Vinyals et al., 2015).2 Specifically, given the query q and the key-sequence K as before, we define confidence vectors ˆφlq, ˆφrq ∈IRn as follows. ˆφlq = S(qT W Q L (KW K L )T √ d ) (7) ˆφrq = S(qT W Q R (KW K R )T √ d ) (8) where S is the softmax function as defined before, and W Q L , W K L , W Q R , W K R ∈IRd×d are trainable parameters. Eq. 7-8 approximate the left and right boundary positions of the mask vector for the query q. However, contrary to the discrete case, they do not enforce absolute cancellation or activation of attention weights on any position in the key-sequence. Instead, they assign a confidence score to each position. This allows the model to gradually correct itself from invalid assignments. Moreover, the softmax operations enable differentiability while maintaining the gradient flow in an end-to-end neural architecture. Note however that the left and right boundary concepts have now become ambiguous since the positions lq = arg max(ˆφlq) and rq = arg max(ˆφrq) are not guaranteed to conform to the constraint lq ≤rq. 
To understand its implication, lets first consider the discrete case in Eq. 6; the elementwise multiplication between flq and grq results in a zero vector for mq if lq > rq, canceling out the attention scores entirely. Although not absolute zeros, 2However, unlike the standard pointer network, in our case there is no direct supervision for learning the pointing function. Our network instead learns it from the end prediction task. in the continuous case, mq would potentially contain significantly small values, which renders the attention implausible. To address this, we compute the soft mask vector ˆmq as follows. ˆmq = (ˆφT lqLn) ⊙(ˆφT rqLT n) + (ˆφT rqLn) ⊙(ˆφT lqLT n) (9) This formulation has two additive terms; the former constructs the mask vector when lq ≤rq, whereas the latter is activated when lq > rq. This ensures a non-zero result regardless of lq and rq values. It can be shown that the values in ˆmq represent the expected value of the discrete flags in mq, i.e., ˆmq = E(mq); see Appendix for a proof. We concatenate the mask vectors horizontally for all the query vectors in Q ∈IRm×d to get the mask matrix M ∈IRm×n. Since the pointing mechanism is invariant to sequence length, the computation of the mask vectors enjoys the same advantages, enabling our models to efficiently perform attentions on any arbitrarily long sequences. In addition, the method is applicable to all attention scenarios – from decoder to encoder cross-attention, encoder self-attention, and decoder self-attention. 3.3 Segment-Based Soft Masking The soft masking introduced above modulates the attention weight on each token separately which may result in unsmooth attention weights on neighbouring tokens. However, words in a sentence are related and they often appear in chunks or phrases, contributing to a shared meaning. Thus, it may be beneficial to assign identical mask values to the tokens within a segment so that they are equally treated in the window selection method. In this section, we propose a novel extension to our soft masking method that enables the mask vector to share the same masking values for the tokens within a segment in a key-sequence. The main idea is to divide the key-sequence K = (k1, k2, . . . , kn) into ⌈n/b⌉consecutive segments and to assign the same masking value to the tokens in a segment. The segment size b is considered a hyper-parameter. We compute the segment-based mask vector m′ q similarly as in Eq. 9, but with Ln replaced by Jn ∈IRn×n defined as follows. Ji,j n = ( 1, if i ≤b⌈j b⌉ 0, otherwise (10) 6593 Figure 2: Segment-based masking for segment size = 2. Instead of pointing to the left and right indices of the tokens, the soft segment-based method (approximately) points to the left and right boundaries of the segments, respectively. m′ q = (ˆφT lqJn) ⊙(ˆφT rqJT n ) + (ˆφT rqJn) ⊙(ˆφT lqJT n ) (11) Eq. 10 - 11 ensure that all the items in a segment share the same masking value, which is the cumulative sum of the confidence scores in ˆφlq and ˆφrq. For instance, suppose ˆφlq = (a1, a2, a3, . . . , an) and segment size b = 2, then the term ˆφT lqJn evaluates to (P2 i=1 ai, P2 i=1 ai, P4 i=1 ai, . . .), and ˆφT lqJT n evaluates to (Pn i=1 ai, Pn i=1 ai, Pn i=3 ai, . . .). Similarly, ˆφT rqJT n and ˆφT rqJn will have segment-level effects on the cumulative sums. Figure 2 visualizes the method with an example for b = 2. 
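To make the masking computations concrete, below is a minimal PyTorch sketch of the trainable soft mask of Eq. 9 and its segment-based variant of Eq. 10–11 for a single query; the helper names, the random weight initialisation, and the explicit construction of L_n and J_n are our own illustration rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def boundary_confidence(q, K, W_q, W_k):
    # Eq. 7-8: pointing scores over key positions, softmax-normalised
    d = q.size(-1)
    return F.softmax((q @ W_q) @ (K @ W_k).transpose(-2, -1) / d ** 0.5, dim=-1)

def soft_mask(phi_l, phi_r, L):
    # Eq. 9 / Eq. 11: symmetric combination keeps the mask non-zero when l_q > r_q
    f_l, g_r = phi_l @ L, phi_r @ L.t()
    f_r, g_l = phi_r @ L, phi_l @ L.t()
    return f_l * g_r + f_r * g_l

n, d, b = 12, 64, 3                               # key length, model dim, segment size b
q, K = torch.randn(d), torch.randn(n, d)
W_QL, W_KL, W_QR, W_KR = (torch.randn(d, d) for _ in range(4))

phi_l = boundary_confidence(q, K, W_QL, W_KL)     # left-boundary confidence (Eq. 7)
phi_r = boundary_confidence(q, K, W_QR, W_KR)     # right-boundary confidence (Eq. 8)

i = torch.arange(1, n + 1)
L_n = (i.unsqueeze(1) <= i.unsqueeze(0)).float()                        # Eq. 4
J_n = (i.unsqueeze(1) <= b * torch.ceil(i / b).unsqueeze(0)).float()    # Eq. 10

m_token   = soft_mask(phi_l, phi_r, L_n)          # token-level soft mask (Eq. 9)
m_segment = soft_mask(phi_l, phi_r, J_n)          # segment-level soft mask (Eq. 11)
```

The resulting mask vector is what the next section combines with the standard attention scores.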
One advantage of this approach is that it allows us to control the masking behavior (by varying b) without increasing the number of parameters compared to the token-based masking. We also show its effectiveness in our experiments. 4 Dynamic Window Attention Methods Having presented our method to compute the mask vector that defines the attention spans, we now present our methods to incorporate the mask vectors into the attention layers. 4.1 Multiplicative Window Attention In this approach, the attention weights (Eq. 2) are (element-wise) multiplied by the mask matrix M to confine their attention scope defined by the mask. Formally, the attention scores and outputs are defined as follows. score = (QW Q)(KW K)T √ d (12) attMW = (S(score) ⊙M)(V W V ) (13) In this approach, the standard global attention weights are suppressed and partially overshadowed by the attention window imposed by M. Thus, it can be interpreted as a local attention method similar to Luong et al. (2015). However, instead of using a static Gaussian bias, we use a dynamic mask to modulate the attention weights. 4.2 Additive Window Attention Having a local attention window could be beneficial, but it does not rule out the necessity of global attention, which has been shown effective in many applications (Vaswani et al., 2017; Devlin et al., 2019). Thus, we also propose an additive window attention, which implements a combination of global attention and local attention. The attention output in this method is formally defined as sglb = (QW Q glb)(QW K glb)T (14) sloc = (QW Q loc)(QW K loc)T ⊙M (15) scoreAW = sglb + sloc √ d (16) attAW = S(scoreAW)(V W V ) (17) where W Q glb, W K glb, W Q loc, and W K loc ∈IRd×d are the weight matrices for global and local attentions. Compared to the multiplicative window attention where the mask re-evaluates the global attention weights, additive window attention applies the mask vector to the local attention scores (sloc), which is then added to the global attention scores (sglb) before passing it through the softmax function. In this way, the mask-defined local window does not suppress the global context but rather complements it with a local context. Moreover, the resulting attention weights add up to one, which avoids attention weights diminishment that could occur in the multiplicative window attention. Additive merger of global and local window components may also facilitate more stable gradient flows. 4.3 Implementation in Transformer We now describe how the proposed dynamic window attention methods can be integrated into the Transformer. Encoder, Decoder and Cross Attentions. Our proposed methods can be readily applied to the any of the attention layers in the Transformer framework. We could also selectively apply our methods to different layers in the encoder and decoder. 6594 In our initial experiments on WMT’14 EnglishGerman development set, we observed that the following settings provide more promising performance gains. First, encoder self-attention layers benefit most from additive window attention, while decoder self-attention layers prefer multiplicative attention. This shows that the global attention component is more useful when the key sequence is provided entirely in the encoder, while less useful when only the fragmented key sequence (past keys) is visible in the decoder. Second, the above argument is further reinforced as we found that cross-attention layers also prefer additive window attention, where the entire source sequence is available. 
Third, crossattention works better with segment-based masking, which provides smoothness and facilitates phrase (n-gram) based translations. Lower-layer Local Attentions. It has been shown that deep neural models learn simple word features and local syntax in the lower layers, while higher layers learn more complex contextdependent aspects of word semantics. Belinkov et al. (2017) show this on NMT models, while Peters et al. (2018) and Jawahar et al. (2019) show this on representation learning with ELMo and BERT respectively. In other words, local contextual information can still be derived in higher layers with the standard global attention. As such, we propose to apply our dynamic window attention methods only to the first 3 layers of the Transformer network, leaving the top 3 layers intact. Our diverse experiments in the following section support this setup as it offers substantial improvements, whereas using local attention in higher layers does not show gains, but rather increases model parameters. 5 Experiment In this section, we present the training settings, experimental results and analysis of our models in comparison with the baselines on machine translation (MT), sentiment analysis, subject verb agreement and language modeling (LM) tasks. 5.1 Machine Translation We trained our models on the standard WMT’16 English-German (En-De) and WMT’14 EnglishFrench (En-Fr) datasets containing about 4.5 and 36 million sentence pairs, respectively. For validation (development) purposes, we used newstest2013 for En-De and a random split from the training set for En-Fr. All translation tasks were evaluated against their respective newstest2014 test sets, in case-sensitive tokenized BLEU. We used byte-pair encoding (Sennrich et al., 2016) with shared source-target vocabularies of 32,768 and 40,000 sub-words for En-De and En-Fr translation tasks, respectively. We compare our models with three strong baselines: (i) Transformer Base (Vaswani et al., 2017), (ii) Transformer Base with Relative Position (Shaw et al., 2018), and (ii) Transformer Base with Localness Modeling (Yang et al., 2018). To ensure a fair comparison, we trained our models and the baselines with the following training setup. Training Setup. We followed model specifications in (Vaswani et al., 2017) and optimization settings in (Ott et al., 2018), with some minor modifications. Specifically, we used word embeddings of dimension 512, feedforward layers with inner dimension 2048, and multi-headed attentions with 8 heads. We trained our models on a single physical GPU but replicated the 8-GPU setup following the gradient aggregation method proposed by Ott et al. (2018). We trained the models for 200,000 updates for En-De and 150,000 updates for En-Fr translation tasks. Finally, we averaged the last 5 checkpoints to obtain the final models for evaluation. The segment size b in the segment-based masking method was set to 5.3 Translation Results. We report our translation results in Table 1; Enc(AW) indicates the use of additive window (AW) attention in the encoder, Dec(MW) indicates the use of multiplicative window (MW) attention in the decoder, and Cr(AW,Seg) indicates the use of additive window attention with segment-based masking for crossattention. The attention module that is not specified in our naming convention uses the default tokenbased global attention in the Transformer. 
For example, Enc(AW)-Dec(MW) refers to the model that uses AW attention in the encoder, MW attention in the decoder and the default global attention for cross attention. We notice that despite a minor increase in the number of parameters, applying our attentions in the encoder and decoder offers about 0.7 and 1.0 BLEU improvements in En-De and En-Fr translation tasks respectively, compared to the 3We did not tune b; tuning b might improve the results further. 6595 Model #-params En-De En-Fr Vaswani et al. (2017) 63M 27.46 39.21 Shaw et al. (2018) 63M 27.56 39.37 Yang et al. (2018) 63M 27.62 39.47 Our Models Enc(AW)-Dec(MW) 68M 28.11 40.24 Cr(AW, Seg) 65M 28.13 40.06 Enc(AW)-Cr(AW,Seg)-Dec(MW) 73M 28.25 40.32 Table 1: BLEU scores for different models in WMT’14 English-German and English-French translation tasks. Method Module Full (6 layers) Partial (3 layers) Transformer 27.46 AW Encoder 27.77 27.90 MW Encoder 27.25 27.40 AW Decoder 27.73 27.85 MW Decoder 27.88 28.04 AW Cross 27.78 27.97 MW Cross 27.58 27.79 Table 2: Evaluation of Additive Window (AW) and Multiplicative Window (MW) attentions in encoder/decoder self attention and cross attention for full vs. partial settings. Transformer base (Vaswani et al., 2017). Our model with the segment-based additive method for cross attention achieves a similar performance. We observe further improvements as we apply our attentions in all the attention modules of the Transformer. Specifically, our model Enc(AW)Cr(AW,Seg)-Dec(MW) achieves 28.25 and 40.32 BLEU in En-De and En-Fr translation tasks, outperforming Transformer base with localness (Yang et al., 2018) by 0.63 and 0.85 BLEU, respectively. 5.2 Ablation Study To verify our modeling decisions, we performed an ablation study in the WMT’14 En-De translation task. In particular, we evaluated (i) the impact of applying our differentiable window attentions in all layers vs. only in certain lower layers of the Transformer network, (ii) which window attention methods (additive or multiplicative) are suitable particularly for the encoder/decoder selfattention and cross-attention, and (iii) the impact of segment-based masking in different attention modules. (iv) training efficiency and performance of our best model with the similar models. Plus, to further interpret our window-based attention, we also provide the local window visualization. Full vs. Partial. Table 2 shows BLEU scores for the Transformer models that employ our windowModel Token-based Segment-based Cr(AW) 27.97 28.13 Enc(AW)-Dec(MW) 28.11 27.91 Table 3: BLEU scores for token- and segment-based masking in cross attention and encoder self-attention. The decoder self-attention always uses token-based masking. based attentions in all 6 layers (Full) vs. only in the first 3 layers (Partial), as well as the methods used in different attention modules (encoder/decoder self-attention, cross-attention). We can see that almost all the models with window-based methods in the first 3 layers outperform those that use them in all 6 layers. This gives the setup significant advantages as it performs not only better in BLEU but also requires less parameters. The results also show that multiplicative window (MW) attention is preferred in decoder selfattention, while additive window (AW) is more suitable for encoder self-attention and for crossattention. This suggests that the global context, which is maintained in AW, is more useful when it is entirely available like in encoder selfattention and cross attention. 
In contrast, incomplete and partially-generated context in decoder self-attention may induce more noise than information, where MW attention renders better performance than AW. Token- vs. Segment-based. Table 3 compares the results for using token-based vs. segment-based masking methods in different attention modules of the network. Note that it is preferred for decoder self-attention to adopt token-based masking since the decoder cannot point to unfinished segments in autoregressive generation, if it had used segmentbased masking. We see that segment-based additive window masking outdoes its token-based counterpart (28.13 vs. 27.97 BLEU) for crossattention. Meanwhile, for encoder self-attention, token-based masking performs better than segmentbased masking by 0.2 BLEU. This suggests that segments (or phrases) represent better translation units than tokens, justifying its performance superiority in cross-lingual attention but not in monolingual (self-attention) encoding. Speed and Parameters. As shown in table 4, our training efficiency is competitive to the baselines. That is, the training speed for our model is 1.04 6596 (a) Local masking scores (M). (b) Our attention scores. (c) Transformer attention scores. Figure 3: Visualization of masking scores, and attention scores for our and the original Transformer models. Model #-params # steps/sec BLEU Vaswani et al. (2017) 63M 1.20 27.46 Yang et al. (2018) 63M 1.07 27.62 Vaswani et al. (2017) 7 layers 69M 1.05 27.74 Vaswani et al. (2017) 8 layers 75M 0.99 27.89 Enc(AW)-Cr(AW,Seg)-Dec(MW) 73M 1.04 28.25 Table 4: Training efficiency and size of similar models steps/sec which is similar to Yang et al. (2018). Besides, our model outperforms the Transformer with 8 layers, which has more parameters. This suggests that our performance gain may not come from additional parameters, but rather from a better inductive bias through the dynamic window attention. Local Window Visualization. To further interpret our window-based attentions, Figure 3a shows the cross-attention soft masking values ( ˆmq) on the source tokens for each target token in an En-Fr test sample assigned by our Enc(AW)-Cr(AW,Seg)Dec(MW) model. The darker the score, the higher the attention is from a target token to a source token. We can see the relevant subwords are captured by the attentions quite well, which promotes ngram-level alignments. For instance, the mask ( ˆmq) guides the model to evenly distribute attention scores on sub-words “Co@@” and “en” (Fig. 3b), while standard attention is biased towards “Co@@” (Fig. 3c). Similar phenomenon can be seen for “Bro@@” and “thers” (towards “fr`eres”). 5.3 Text Classification We evaluate our models on the Stanford Sentiment Treebank (SST) (Socher et al., 2013), IMDB sentiment analysis (Maas et al., 2011) and SubjectVerb Aggreement (SVA) (Linzen et al., 2016) tasks. We compare our attention methods (incorporated into the Transformer encoder) with the encoders of Vaswani et al. ( 2017), Shaw et al. (2018) and Yang et al. (2018). Model STT IMDB SVA Vaswani et al. (2017) 79.36 83.65 94.48 Shaw et al. (2018) 79.73 84.61 95.27 Yang et al. (2018) 79.24 84.13 95.00 Enc (MW) 79.70 85.09 95.95 Enc (AW) 82.13 87.98 96.19 Table 5: Classification accuracy on on Stanford Sentiment Treebank (SST) and IMDB sentiment analysis and Subject-Verb Agreement(SVA) tasks. Training Setup. 
As the datasets are quite small compared to the MT datasets, we used tiny versions of our models as well as the baselines.4 Specifically, the models consist of a 2-layer Transformer encoder with 4 attention heads, 128 hidden dimensions and 512 feedforward inner dimensions. In these experiments, our attention methods are applied only to the first layer of the network. We trained for 3,000, 10,000 and 10,000 updates for SST, IMDB and SVA tasks, respectively on a single GPU machine. Results. Table 5 shows the results. Our multiplicative window approach (Enc (MW)) achieves up to 79.7%, 85.1% and 95.95% accuracy in SST, IMDB and SVA, exceeding Transformer (Vaswani et al., 2017) by 0.4%, 1.35% and 1.47%, respectively. Our additive window attention (Enc (AW)) renders even more improvements. Specifically, it outperforms Transformer with relative position (Shaw et al. 2018) by 2.4% and 3.37%, 0.92% reaching 82.13%, 87.98% and 96.19% accuracy in SST, IMDB and SVA, respectively. In fact, the results demonstrate consistent trends with our earlier MT experiments: additive window attention outdoes its multiplicative counterpart in the encoder, 4As specified in https://github.com/tensorflow/tensor2tensor. 6597 Model Perplexity Vaswani et al. (2017) 46.37 Shaw et al. (2018) 46.13 Dec (MW) 44.00 Dec (AW) 44.95 Table 6: Perplexity scores on 1-billion-word language modeling benchmark (the lower the better). where the entire key sequence is available. 5.4 Language Modeling Finally, to demonstrate our proposed methods as effective general purpose NLP components, we evaluate them on the One Billion Word LM Benchmark dataset (Chelba et al., 2013). The dataset contains 768 million words of data compiled from WMT 2011 News Crawl data, with a vocabulary of 32,000 words. We used its held-out data as the test set. Training Setup. As the LM dataset is considerably large, we used the same model settings as adopted in our MT experiments. For these experiments, we only trained the models on virtually 4 GPUs for 100,000 updates using gradient aggregation on a single GPU machine. Note that only the self-attention based autoregressive decoder of the Transformer framework is used in this task. Therefore, the method of Yang et al. (2018) is not applicable to this task. Results. Table 6 shows the perplexity scores. As can be seen, our multiplicative and additive window attention models both surpass Transformer (Vaswani et al., 2017) by 2.37 and 1.42 points respectively, reaching 44.00 and 44.95 perplexity scores respectively. In addition, it is noteworthy that similar to MT experiments, multiplicative attention outperforms the additive one on this task, where the decoder is used. This further reinforces the claim that where the global context is not fully available like in the decoder, the incomplete global context may induce noises into the model. Thus, it is effective to embrace dynamic local window attention to suppress the global context, for which the multiplicative window attention is designed. 6 Conclusion We have presented a novel Differential Window method for dynamic window selection, and used it to improve the standard attention modules by enabling more focused attentions. Specifically, we proposed Trainable Soft Masking and Segmentbased Masking, which can be applied to encoder/decoder self-attentions and cross attention. We evaluated our models on four NLP tasks including machine translation, sentiment analysis, subject verb agreement and language modeling. 
Our experiments show that our proposed methods outperform the baselines significantly across all the tasks. All in all, we demonstrate the benefit of incorporating the differentiable window in the attention. In the future, we would like to extend our work to make a syntactically-aware window that can automatically learn tree (or phrase) structures. Acknowledgments We would like to express our gratitude to Yi Tay and our anonymous reviewers for their insightful feedback on our paper. Shafiq Joty would like to thank the funding support from his Start-up Grant (M4082038.020). References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861–872, Vancouver, Canada. Association for Computational Linguistics. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. Technical report, Google. Chung-Cheng Chiu and Colin Raffel. 2018. Monotonic chunkwise attention. In International Conference on Learning Representations. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Ganesh Jawahar, Benoˆıt Sagot, and Djam´e Seddah. 2019. What does BERT learn about the structure 6598 of language? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651–3657, Florence, Italy. Association for Computational Linguistics. Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntaxsensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521–535. Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), EMNLP, pages 1412–1421. ACL. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation (WMT). Matthew Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. 2018. Dissecting contextual word embeddings: Architecture and representation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509, Brussels, Belgium. Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725. Association for Computational Linguistics. Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642. Association for Computational Linguistics. Matthias Sperber, Jan Niehues, Graham Neubig, Sebastian Stuker, and Alex Waibel. 2018. Self-attentional acoustic models. In Interspeech 2018, 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, 2-6 September 2018., pages 3723–3727. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008. Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2692–2700. Curran Associates, Inc. Shuohang Wang and Jing Jiang. 2017. Machine comprehension using match-lstm and answer pointer. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015a. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2048–2057, Lille, France. PMLR. Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015b. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, pages 2048–2057. JMLR.org. Baosong Yang, Zhaopeng Tu, Derek F. Wong, Fandong Meng, Lidia S. Chao, and Tong Zhang. 2018. Modeling localness for self-attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4449– 4458, Brussels, Belgium. Association for Computational Linguistics. 
Appendix

Proof: $\hat{m}_q = \mathbb{E}(m_q)$

The probabilities of the left and right boundaries for a query $q$ are

$\hat{\phi}_{l_q} = \mathcal{S}\!\left(\frac{q^T W^Q_L (K W^K_L)^T}{\sqrt{d}}\right)$  (18)

$\hat{\phi}_{r_q} = \mathcal{S}\!\left(\frac{q^T W^Q_R (K W^K_R)^T}{\sqrt{d}}\right)$  (19)

For any position $k$,

$p(f_k = 1) = p(l_q \le k) = \sum_{i \le k} \hat{\phi}^i_{l_q} = (\hat{\phi}^T_{l_q} L_n)_k$  (20)

$p(g_k = 1) = p(r_q \ge k) = \sum_{i \ge k} \hat{\phi}^i_{r_q} = (\hat{\phi}^T_{r_q} L^T_n)_k$  (21)

Since $f_k$ and $g_k$ are binary values,

$\hat{f}_k = p(f_k = 1) = \mathbb{E}(f_k)$  (22)

$\hat{g}_k = p(g_k = 1) = \mathbb{E}(g_k)$  (23)

Hence,

$\hat{m}_q = \hat{f}_{l_q} \odot \hat{g}_{r_q} + \hat{f}_{r_q} \odot \hat{g}_{l_q} = \mathbb{E}(m_q)$  (24)
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 625–638 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 625 Multi-Agent Task-Oriented Dialog Policy Learning with Role-Aware Reward Decomposition Ryuichi Takanobu, Runze Liang, Minlie Huang∗ Institute for AI, BNRist, DCST, Tsinghua University, Beijing, China {gxly19, liangrz15}@mails.tsinghua.edu.cn, [email protected] Abstract Many studies have applied reinforcement learning to train a dialog policy and show great promise these years. One common approach is to employ a user simulator to obtain a large number of simulated user experiences for reinforcement learning algorithms. However, modeling a realistic user simulator is challenging. A rule-based simulator requires heavy domain expertise for complex tasks, and a data-driven simulator requires considerable data and it is even unclear how to evaluate a simulator. To avoid explicitly building a user simulator beforehand, we propose Multi-Agent Dialog Policy Learning, which regards both the system and the user as the dialog agents. Two agents interact with each other and are jointly learned simultaneously. The method uses the actorcritic framework to facilitate pretraining and improve scalability. We also propose Hybrid Value Network for the role-aware reward decomposition to integrate role-specific domain knowledge of each agent in task-oriented dialog. Results show that our method can successfully build a system policy and a user policy simultaneously, and two agents can achieve a high task success rate through conversational interaction. 1 Introduction Dialog policy, which decides the next action that the dialog agent should take, plays a vital role in a task-oriented dialog system. More recently, dialog policy learning has been widely formulated as a Reinforcement Learning (RL) problem (Su et al., 2016; Peng et al., 2017; He et al., 2018; Zhao et al., 2019; Zhang et al., 2019; Takanobu et al., 2019), which models users as the interactive environment. Since RL requires much interaction for training, it is too time-consuming and costly to interact with real users directly. The most common way is first ∗Corresponding author to develop a dialog agent with a user simulator that mimics human behaviors in an offline scenario. Designing a reliable user simulator, however, is not trivial and often challenging as it is equivalent to building a good dialog agent. With the growing needs for the dialog system to handle more complex tasks, it will be much challenging and laborious to build a fully rule-based user simulator, which requires heavy domain expertise. Datadriven user simulators have been proposed in recent studies (Kreyssig et al., 2018; Shi et al., 2019), but they require a considerable quantity of manually labeled data, most of which regard the simulator as a stationary environment. Furthermore, there is no standard automatic metric for evaluating these user simulators, as it is unclear to define how closely the simulator resembles real user behaviors. In this paper, we propose Multi-Agent Dialog Policy Learning (MADPL), where the user is regarded as another dialog agent rather than a user simulator. The conversation between the user and the system is modeled as a cooperative interactive process where the system agent and the user agent are trained simultaneously. 
Two dialog agents interact with each other and collaborate to achieve the goal so that they require no explicit domain expertise, which helps develop a dialog system without the need of a well-built user simulator. Different from existing methods (Georgila et al., 2014; Papangelis et al., 2019), our approach is based on actor-critic framework (Barto et al., 1983) in order to facilitate pretraining and bootstrap the RL training. Following the paradigm of centralized training with decentralized execution (CTDE) (Bernstein et al., 2002) in multi-agent RL (MARL), the actor selects its action conditioned only on its local stateaction history, while the critic is trained with the actions of all agents. It should be noted that the roles of two agents are different though they interact with each other 626 Figure 1: The user has his/her own goal to be accomplished and the system is provided with an interface to access an external database. Both agents can only obtain information from the other side via communication. in a cooperative setting. As shown in Fig. 1, only the user agent knows the user goal, while only the system agent can access the backend database. The user agent should express the requirements completely in an organized way, and the system should respond with useful information accurately and immediately. So it is inappropriate to apply simple self-play RL (Silver et al., 2017; Lewis et al., 2017) that views two agents as the same agent in this task. To address this issue, the system and the user are viewed as two asymmetric agents in MADPL. We introduce Hybrid Value Network (HVN) for roleaware reward decomposition. It decomposes the reward into two parts: one is the role-specific reward that focuses on its local target, and the other is the global reward that represents the shared goal. To evaluate the proposed approach, we conduct our experiments on a multi-domain, multiintent task-oriented dialog corpus, MultiWOZ (Budzianowski et al., 2018). The corpus involves high dimensional state and action spaces, multiple decision making in one turn, which makes it more difficult to get a good system policy as well as a good user policy. The experiments demonstrate that MADPL can successfully build a system policy as well as a user policy with the aid of HVN, and two agents can achieve high task success rate in complex tasks by interacting with each other as well as with benchmark policies. To summarize, our contributions are in three folds: • We apply actor-critic based multi-agent reinforcement learning to learn the task-oriented dialog policy to facilitate pretraining and avoid explicitly building a user simulator. • We propose Hybrid Value Network for reward decomposition to deal with the asymmetric role issue between the system agent and the user agent in the task-oriented dialog. • We conduct in-depth experiments on the multidomain, multi-intent task-oriented dialog corpus to show the effectiveness, reasonableness and scalability of our algorithm. 2 Related Work 2.1 Multi-Agent Reinforcement Learning The goal of RL is to discover the optimal strategy π∗(a|s) of the Markov Decision Process, which can be extended into the N-agent setting, where each agent has its own set of states Si and actions Ai. In MARL, the state transition s = (s1, . . . , sN) → s′ = (s′ 1, . . . , s′ N) depends on the actions taken by all agents (a1, . . . 
, aN) according to each agent’s policy πi(ai|si) where si ∈Si, ai ∈Ai, and similar to single RL, each agent aims to maximize its local total discounted return Ri = P t γtri,t. Since two or more agents learn simultaneously, the agents continuously change as the training proceeds, therefore the environment is no longer stationary. Many MARL algorithms (Lowe et al., 2017; Foerster et al., 2018; Rashid et al., 2018) have been proposed to solve challenging problems. Most of them use the CTDE framework to address the non-stationarity of co-adapting agents. It allows the policies to use extra information to ease training, but the learned policies can only use local information (i.e. their own observations) at execution time. Several studies have demonstrated that applying MARL delivers promising results in NLP tasks these years. While some methods use identical rewards for all agents (Das et al., 2017; Kottur et al., 2017; Feng et al., 2018), other studies use completely separate rewards (Georgila et al., 2014; Papangelis et al., 2019). MADPL integrates two types of rewards by role-aware reward decomposition to train a better dialog policy in task-oriented dialog. 2.2 User Modeling in Task-Oriented Dialog User modeling is essential for training RL-based dialog models, because a large amount of dialog samples are required for RL policy learning, mak627 ing it impractical to learn with real users directly from the beginning. There are three main approaches for user modeling. The first approach is to build a rule-based user simulator. Among these methods, the most popular one is agenda-based simulator (Schatzmann et al., 2007; Shah et al., 2018), which is built on hand-crafted rules with a stack-like agenda based on the user goal. The second approach is to build a user simulator from the dialog data (Keizer et al., 2010; El Asri et al., 2016; Kreyssig et al., 2018). Recently, G¨ur et al. (2018) uses a variational hierarchical seq2seq framework to encode user goal and system turns, and then generate the user response. Shi et al. (2019) uses two decoders with a copy and attention mechanism to predict a belief span first and then decode user utterance. The third approach is to use model-based policy optimization that incorporates a differentiable model of the world dynamics and assumptions about the interactions between users and systems (Su et al., 2018; Zhang et al., 2019), but this approach still requires real users or a user simulator for world model learning. Instead of employing a user simulator, a few methods jointly learn two agents directly from the corpus. Liu and Lane (2017) models the system and the user by iteratively training two policies. Papangelis et al. (2019) make the first attempt to apply MARL into the task-oriented dialog policy, whose algorithm is based on Q-learning for mixed policies. However, it is not well scalable to complex tasks such as multi-domain dialog. Therefore, MADPL uses the actor-critic framework instead to deal with the large discrete action space in dialog. 3 Multi-Agent Dialog Policy Learning We first formally describe the task, and then present the overview of our proposed model. Specifically, given a user goal G=(C,R) composed of the user constraints C (e.g. a Japanese restaurant in the center of the city) and requests R (e.g. 
inquiry for address, phone number of a hotel), and given an external database DB containing all candidate entities and corresponding information, the user agent and system agent interact with each other in a dialog session to fulfill the user goal. There can be multiple domains in G, and two agents have to accomplish all the subtasks in each domain. Both agents can partially observe the environment, i.e. only the user agent knows G, while only the sysFigure 2: Architecture of MADPL. HVN consists of three critics. Each critic estimates its return based on role-aware reward decomposition, and each actor uses the estimated value to optimize itself. tem agent can access DB, and the only way to know each other’s information is through conversational interaction. Different from ordinary multiagent task setting, two agents in dialog are executed asynchronously. In a single dialog turn, the user agent posts an inquiry first, then the system agent returns a response, and the two communicate alternately. Therefore, each dialog session τ can be seen as a trajectory of state-action pairs {(sU 0 , aU 0 , sS 0 , aS 0 ); (sU 1 , aU 1 , sS 1 , aS 1 ); . . . }, where the user agent and the system agent make decisions according to each dialog policy µ(aU|sU), π(aS|sS) respectively. Here we present a novel algorithm, Multi-Agent Dialog Policy Learning (MADPL), as shown in Fig. 2, which can be naturally formulated as a MARL problem. Two agents interact through dialog acts following (Georgila et al., 2014). We choose the actor-critic framework in order to learn an explicitly stochastic dialog policy (actor) for high scalability along with an estimated value function (critic) to bootstrap RL training. Besides, this can facilitate imitation learning to pretrain the dialog policy using human-human dialogs. Since two agents cooperate to reach success, yet their roles are asymmetric in the dialog, we propose Hybrid Value Network (HVN) to decompose the task reward into different parts for better policy learning. Note that our approach is fully data-driven without building a user simulator beforehand, and does not need any other human supervision during training. In the subsequent subsections, we will first explain the state and action used in two dialog policies. Then we describe how we decompose the reward and the proposed HVN. At last, we present model optimization. 628 3.1 Dialog Policy System Policy The system policy π decides the system action aS according to the system dialog state sS to give the appropriate response to user agent. Each system action aS is a subset of dialog act set A as there may be multiple intents in one dialog turn. A dialog act is an abstract representation of an intention (Stolcke et al., 2000), which can be represented in a quadruple composed of domain, intent, slot type and slot value (e.g. [restaurant, inform, food, Italian]). In practice, dialog acts are delexicalized in the dialog policy. We replace the slot value with a count placeholder and refill it with the true value according to the entity selected from the external database DB, which allows the system to operate on unseen values. The system dialog state sS t at dialog turn t is the concatenation of (I) user action at current turn aU t ; (II) system action at the last turn aU t−1; (III) the belief state bt (Williams et al., 2016) that keeps track of constraint slots and request slots supplied by the user agent; and (IV) embedding vectors of the number of query results qt from DB. 
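As a rough illustration of how such a system state could be assembled into a flat feature vector (this is an assumption on our part, not the authors' implementation; the dialog-act vocabulary size, belief-state dimension, and DB-count bucketing below are placeholders), consider:

```python
import numpy as np

# Illustrative sizes only -- the real dimensions depend on the MultiWOZ ontology.
N_ACTS, N_BELIEF, N_DB_BUCKETS = 166, 90, 5

def db_count_embedding(n_results: int) -> np.ndarray:
    """One-hot bucket for the number of DB query results (0, 1, 2-3, 4-6, >6)."""
    buckets = [0, 1, 3, 6]
    idx = sum(n_results > b for b in buckets)
    vec = np.zeros(N_DB_BUCKETS)
    vec[idx] = 1.0
    return vec

def system_state(user_act, last_sys_act, belief_state, n_db_results):
    """s^S_t = [a^U_t ; a^S_{t-1} ; b_t ; q_t] as a single flat vector."""
    return np.concatenate([
        user_act,                           # multi-hot over delexicalized dialog acts
        last_sys_act,                       # multi-hot over delexicalized dialog acts
        belief_state,                       # tracked constraint / request slots
        db_count_embedding(n_db_results),   # embedding of the number of query results
    ])

s = system_state(np.zeros(N_ACTS), np.zeros(N_ACTS), np.zeros(N_BELIEF), 12)
```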
User Policy The user policy µ decides the user action aU according to the user dialog state sU to express its constraint and request to the system agent. Similar to the system policy, the user policy uses delexicalized dialog acts as actions, and the value is refilled according to the user goal G. User dialog state sU t is the concatenation of (I) last system action aS t−1; (II) last user action aU t−1; (III) the goal state gt that represents the remained constraint and request that need to send; (IV) inconsistency vector ct (Kreyssig et al., 2018) that indicates the inconsistency between the systems response and user constraint C. In addition to predicting dialog acts, the user policy outputs terminal signal T at the same time, i.e. µ = µ(aU, T|sU). 3.2 Reward Decomposition On the one hand, the roles between the user agent and the system agent are different. The user agent actively initiates a task and may change it during conversation, but the system agent passively responds to the user agent and returns the proper information, so the reward should be considered separately for each agent. On the other hand, two agents communicate and collaborate to accomplish the same task cooperatively, so the reward also involves a global target for both agents. Therefore, we decompose the mixed reward into three parts according to the characteristic of each component. The reward of each part is explained as follows: System Reward rS t consists of (I) empty dialog act penalty aS t = ∅; (II) late answer penalty if there is a request slot triggered but the system agent does not reply the information immediately; and (III) task success reward based on the user agent’s description. User Reward rU t consists of (I) empty dialog act penalty aU t = ∅; (II) early request penalty if the user agent requests for information when there is still a constraint slot remained to inform; and (III) user goal reward whether the user agents have expressed all the constraints C and requests R. Global Reward rG t consists of (I) efficiency penalty that a small negative value will be given at each dialog turn; (II) sub-goal completion reward once the subtask of G in a particular domain is accomplished; and (III) task success reward based on user goal G. Obviously, each agent should obtain its local reward, and both agents should receive the global reward during the training process. Note that the task success and the user goal reward are only computed at the end of the dialog, and the task success computed in the system reward differs from the one in the global reward. 3.3 Hybrid Value Network The value function aims to estimate the expected return given the current state V (st) = E[Rt] = E[P t′≥t γt′−trt′] so that the policy can directly use the estimated cumulative reward for optimization, without sampling the trajectories to obtain rewards which may cause high variance. Another advantage by applying actor-critic approaches in MARL is that it can integrate with the CTDE framework: the actor of each agent benefits from a critic that is augmented with additional information about the policies of other agents during training. However, a simple centralized critic conditioned on the global state and joint actions cannot well exploit the domain knowledge mentioned above since each part of the overall rewards only depends on a subset of features, e.g. the system reward only depends on the system agent’s behaviors. 
Inspired by Hybrid Reward Architecture (Van Seijen et al., 2017), which learns a separate Q-function for each reward component, we propose Hybrid Value Network to improve the estimate of the optimal role-aware value function. It first encodes the dialog state of each agent to learn a state representation
$$h^S_s = \tanh\big(f^S_s(s^S)\big), \qquad h^U_s = \tanh\big(f^U_s(s^U)\big),$$
where $f(\cdot)$ can be any neural network unit. The value network $V$ is separated into three branches $V^S$, $V^U$ and $V^G$ for the value of system rewards, user rewards and global rewards, respectively:
$$V^S(s^S) = f^S(h^S_s), \qquad V^U(s^U) = f^U(h^U_s), \qquad V^G(s) = f^G\big([h^S_s; h^U_s]\big).$$

3.4 Optimization
The action space for the policies can be very large since we deal with multi-domain, complex dialog tasks, which makes it almost impossible for the RL policies to explore and learn from scratch. So the training process is split into two stages (Fatemi et al., 2016; Takanobu et al., 2019): pretraining the dialog policy with the conversational corpus first, and then using RL to improve the pretrained policies. We use $\beta$-weighted logistic regression for policy pretraining here to alleviate data bias, because each agent only generates several dialog acts in one dialog turn:
$$L(X, Y; \beta) = -\big[\beta \cdot Y^{T} \log \sigma(X) + (I - Y)^{T} \log(I - \sigma(X))\big], \quad (1)$$
where X is the state and Y is the action from the corpus in this task.

As for critic optimization, it aims to minimize the squared error between the temporal difference (TD) target $r_t + \gamma V(s_{t+1})$ and the estimated value $V(s_t) = \mathbb{E}[r_t + \gamma V(s_{t+1})]$. Actor-critic algorithms have high variance since the critic is updated frequently, which causes severe changes in the estimated value, particularly in multi-agent tasks. So we introduce a target network (Mnih et al., 2015) to make the training process more stable. In the context of HVN, the critic aims to minimize the following loss functions:
$$L^S_V(\theta) = \big(r^S + \gamma V^S_{\theta^-}(s'^S) - V^S_{\theta}(s^S)\big)^2,$$
$$L^U_V(\theta) = \big(r^U + \gamma V^U_{\theta^-}(s'^U) - V^U_{\theta}(s^U)\big)^2,$$
$$L^G_V(\theta) = \big(r^G + \gamma V^G_{\theta^-}(s') - V^G_{\theta}(s)\big)^2,$$
$$L_V = L^S_V + L^U_V + L^G_V, \quad (2)$$
where HVN $V_\theta$ is parameterized by $\theta$, $\theta^-$ is the weight of the target network, and the overall loss $L_V$ is the sum of the value estimation losses on each component reward.

Algorithm 1: Multi-Agent Dialog Policy Learning
Require: Dialog corpus D with annotations of dialog acts {a}
1: Initialize weights φ, ω for system policy π and user policy µ respectively
2: Pretrain policies π, µ on human conversational data D using Eq. 1
3: Initialize weights θ for hybrid value network V = (V^S, V^U, V^G) and target network θ− ← θ
4: foreach training iteration do
5:   Initialize user goal and dialog state s^U, s^S
6:   repeat
7:     Sample actions a^U, a^S and terminal signal T using current policies π, µ
8:     Execute actions and observe rewards r^U, r^S, r^G and new states s'^U, s'^S
9:     Update hybrid value network (critic) using Eq. 2
10:    Compute the advantages A^U, A^S, A^G using the current value network
11:    Update the two dialog policies (actor) using Eq. 3
12:    s^U ← s'^U, s^S ← s'^S
13:    Assign target network parameters θ− ← θ every C steps
14:  until the session ends according to T
15: end

Each dialog policy aims to maximize all the related returns; e.g. the system policy π aims to maximize the cumulative system rewards and global rewards $\mathbb{E}\big[\sum_t \gamma^t (r^S_t + r^G_t)\big]$. The advantage $A(s) = r + \gamma V(s') - V(s)$ estimated by the critic evaluates the new state $s'$ against the current state $s$ to determine whether the dialog has become better or worse than expected. With the aid of HVN, the sum of the related component advantages can be used to update the different agents.
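The following PyTorch-style sketch illustrates HVN and its training objectives: the three critic losses of Eq. 2 and an advantage-weighted surrogate for the policy-gradient update of Eq. 3, which is derived next. Layer sizes, batch layout and the single-linear-layer encoders are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridValueNetwork(nn.Module):
    """Three value heads V^S, V^U, V^G on top of role-specific state encoders."""
    def __init__(self, sys_dim, usr_dim, hidden=64):
        super().__init__()
        self.enc_s = nn.Linear(sys_dim, hidden)   # h^S_s = tanh(f^S_s(s^S))
        self.enc_u = nn.Linear(usr_dim, hidden)   # h^U_s = tanh(f^U_s(s^U))
        self.v_s = nn.Linear(hidden, 1)
        self.v_u = nn.Linear(hidden, 1)
        self.v_g = nn.Linear(2 * hidden, 1)       # V^G reads the concatenated states

    def forward(self, s_sys, s_usr):
        h_s = torch.tanh(self.enc_s(s_sys))
        h_u = torch.tanh(self.enc_u(s_usr))
        return (self.v_s(h_s).squeeze(-1),
                self.v_u(h_u).squeeze(-1),
                self.v_g(torch.cat([h_s, h_u], dim=-1)).squeeze(-1))

def critic_loss(hvn, hvn_target, batch, gamma=0.99):
    """L_V = L^S_V + L^U_V + L^G_V with TD targets from the frozen target network (Eq. 2)."""
    v_s, v_u, v_g = hvn(batch["s_sys"], batch["s_usr"])
    with torch.no_grad():
        nv_s, nv_u, nv_g = hvn_target(batch["next_s_sys"], batch["next_s_usr"])
    loss = F.mse_loss(v_s, batch["r_sys"] + gamma * nv_s)
    loss += F.mse_loss(v_u, batch["r_usr"] + gamma * nv_u)
    loss += F.mse_loss(v_g, batch["r_glob"] + gamma * nv_g)
    return loss

def actor_losses(log_prob_sys, log_prob_usr, adv_s, adv_u, adv_g):
    """REINFORCE-style surrogate: each policy is weighted by its own plus the global advantage."""
    loss_sys = -(log_prob_sys * (adv_s + adv_g).detach()).mean()
    loss_usr = -(log_prob_usr * (adv_u + adv_g).detach()).mean()
    return loss_sys, loss_usr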
By using the log-likelihood ratio trick, the gradients for the system policy and the user policy are:
$$\nabla_\phi J_\pi(\phi) = \nabla_\phi \log \pi_\phi(a^S \mid s^S)\,\big[A^S(s^S) + A^G(s)\big], \quad (3)$$
$$\nabla_\omega J_\mu(\omega) = \nabla_\omega \log \mu_\omega(a^U \mid s^U)\,\big[A^U(s^U) + A^G(s)\big],$$
where the system policy $\pi_\phi$ is parameterized by $\phi$ and the user policy $\mu_\omega$ by $\omega$. In summary, a brief script for MADPL is shown in Algorithm 1.

4 Experimental Setting

4.1 Dataset
MultiWOZ (Budzianowski et al., 2018) is a multi-domain, multi-intent task-oriented dialog corpus that contains 7 domains, 13 intents, 25 slot types, 10,483 dialog sessions, and 71,544 dialog turns. During the data collection process, a user is asked to follow a pre-specified user goal, and is allowed to change the goal during the session if necessary, so the collected dialogs are much closer to real-world conversations. The corpus also provides the domain knowledge that defines all the entities and attributes as the external database.

4.2 Metrics
Evaluation of a task-oriented dialog system mainly consists of the cost and the task success. We count the number of dialog turns to reflect the dialog cost. A user utterance and a subsequent system utterance are regarded as one dialog turn. We utilize two other metrics, inform F1 and match rate, to estimate the task success. Both metrics are calculated at the dialog act level. Inform F1 evaluates whether all the requested information has been informed, and match rate checks whether the booked entities match all the constraints indicated by the user. The overall task success is reached if and only if both inform recall and match rate are 1.

4.3 Baselines
We compare MADPL with a series of baselines that involve both system policy learning and user policy learning. Note that we do not consider any approaches that use a user simulator for policy training, because our motivation is to avoid explicitly modeling a simulator.

SL Supervised Imitation Learning directly uses the dialog act annotations and trains the agents simply by behavior cloning using Eq. 1, which is the same as the pretraining phase in MADPL.

The following three baselines are all RL algorithms that start from the pretrained policy:

RL Independent Reinforcement Learning learns only one dialog policy by fixing the other agent, following the single-agent RL setting, and the reward for the agent is the sum of the role-specific reward and the global reward. For example, the user policy uses the reward $r = r^U + r^G$ at each dialog turn.

CRL Centralized Reinforcement Learning is a MARL approach that uses a single centralized critic on the sum of rewards $r = r^U + r^S + r^G$ to train the two agents simultaneously, which also serves as an ablation of MADPL.

IterDPL Iterative Dialog Policy Learning (Liu and Lane, 2017) updates the two agents iteratively using single-agent RL training to reduce the risk of non-stationarity when jointly training the two agents.

Class   Attraction  Hospital  Hotel  Police  Restaurant  Taxi  Train
Count   320         22        389    22      457         164   421

Num. of domains   Single  Two  Three
Count             328     549  123

Table 1: Domain distribution of user goals used in the automatic evaluation. A user goal with multiple domains is counted repeatedly for each domain.

5 Automatic Evaluation

5.1 Interaction between Two Agents
A set of 1,000 user goals is used for automatic evaluation, as shown in Table 1. When the dialog is launched, the two agents interact with each other around a given user goal. The performance of the interaction between the two trained policies is shown in Table 2. MADPL reaches the highest match rate and task success among all the methods.
It manages to improve the success rate of the pretrained policies from 49.7% to 70.1%. Single RL policies (rows 2 to 4) show limited improvement, and even decline in match rate, since they assume a stationary environment. The comparison between CRL and IterDPL indicates the effectiveness of CTDE in the multi-agent task. The superiority of MADPL over CRL shows that the two agents benefit from the role-aware reward decomposition in HVN.

System   User  Turns  Inform  Match  Success
SL       SL    6.34   73.08   82.58  49.7
SL       RL    8.75   76.86   76.28  60.2
RL       SL    6.20   72.84   79.15  51.1
RL       RL    7.92   75.96   70.37  58.7
CRL            8.13   68.29   89.71  66.6
IterDPL        8.79   74.01   81.04  64.6
MADPL          8.96   76.26   90.98  70.1

Table 2: Performance of the interaction between the user agent and the system agent.

Figure 3: Learning curves of the interaction between the user agent and the system agent.

The learning curves in Fig. 3 illustrate that the success rate grows rapidly in MADPL, and it keeps improving as the training proceeds. The average of each component reward is shown in Fig. 4. We run 10 different instances of MADPL with different random seeds. The solid curves correspond to the mean and the shaded regions to the standard deviation of rewards over the 10 trials. We can observe that all the rewards increase steadily during the training process, which implies that HVN has estimated a proper return for policy training.

Figure 4: Learning curves of MADPL on system reward (top), user reward (middle) and global reward (bottom).

5.2 Interaction with Benchmark Policies
It is essential to evaluate whether all the agents in a multi-agent dialog system understand the semantic interaction rather than inventing an uninterpretable language (Kottur et al., 2017; Lee et al., 2019a). To this end, we use two benchmark policies in the standardized task-oriented dialog system platform Convlab (Lee et al., 2019b) to examine all the methods. Each benchmark is a strong rule-based system policy or user policy at the dialog act level, which is used as the simulated evaluation in the DSTC-8 Track 1 competition and shows a high correlation with real user interaction (Li et al., 2020). The trained system/user policy in each method is directly deployed to interact with the benchmark user/system policy during the test without any further fine-tuning, which can be regarded as a weakly zero-shot experiment. The same goal set as in Table 1 is used here.

Table 3 and Fig. 5 show the results of the interaction between the benchmark user policy and the system agent of each model. The state-of-the-art performance of GDPL (Takanobu et al., 2019), which trains directly against the benchmark user policy, is also presented as a soft performance upper bound. Among all the methods, MADPL achieves the highest task success and the second-highest match rate. All the methods experience a decline in inform F1 after the RL training. Fig. 5 also shows that the success rate is unstable during training. This is because the action space of the system policy is much larger and thus more challenging to learn. In spite of that, the success rate of MADPL shows a rising trend.

Table 4 and Fig. 6 show the results of the interaction between the user agent of each method and the benchmark system policy. Among all the methods, MADPL achieves the highest inform F1 and task success. Though CRL improves the performance at the beginning, its success rate fails to increase further afterwards, while MADPL continues to improve throughout training.
This also indirectly indicates the advantage of using role-aware reward decomposition in HVN.

System   Turns  Inform  Match  Success
SL       7.76   83.33   85.84  84.2
RL       7.53   82.06   85.77  84.3
CRL      8.38   72.43   89.48  86.4
IterDPL  7.74   79.68   82.49  82.5
MADPL    7.63   79.93   89.24  87.7
GDPL     7.62   92.10   91.50  92.1

Table 3: Performance of the interaction between the benchmark user policy and each system agent.

Figure 5: Learning curves of the interaction between the benchmark user policy and each system agent.

User     Turns  Inform  Match  Success
SL       8.64   78.64   87.84  51.7
RL       11.18  85.69   92.13  77.2
CRL      11.31  86.58   92.89  74.7
IterDPL  12.53  84.68   92.57  75.5
MADPL    13.25  87.04   90.81  83.7

Table 4: Performance of the interaction between each user agent and the benchmark system policy.

Figure 6: Learning curves of the interaction between each user agent and the benchmark system policy.

In summary, each policy trained with MADPL can interact well with the benchmark policy, which implies that MADPL learns a reasonable dialog strategy.

5.3 Goal across Multiple Domains
We also investigate the domains in the user goals to observe the scalability of each method on complex tasks. 200 goals are randomly sampled under each setting. Fig. 7 presents the results of the interaction between the two agents for different numbers or classes of domains. The success rate decreases substantially as the number of domains in the goal increases. When there are 3 domains in the goal, RL/RL gets a high inform F1 but a low match rate, IterDPL gets a high match rate but a low inform F1, while MADPL still keeps both a high inform F1 and a high match rate, and obtains the highest task success. In terms of the class of domains, there are 7/10/6 informable slots that need to be tracked in the Restaurant/Hotel/Train domain, respectively. Among these, MADPL outperforms the other baselines in the Restaurant and Hotel domains, and performs comparably in the Train domain. In brief, all the results indicate that MADPL has good scalability in multi-domain dialog.

Figure 7: Performance of dialog agents according to the different number (left) or class (right) of domains in the dialog.

6 Human Evaluation
For human evaluation, we hire Amazon Mechanical Turkers to conduct pairwise comparisons between MADPL and the baselines. Since all the policies work at the dialog act level, we generate texts from the dialog acts using hand-crafted templates to make the dialogs readable. Each Turker is asked to read a user goal first; we then show 2 dialog sessions around this user goal, one from MADPL and the other from a baseline. We randomly sample 100 goals for each baseline. For each goal, 5 Turkers are asked to judge independently which dialog is better (win, draw or lose) according to different subjective assessments: (I) system quality, (II) user quality, and (III) task success. The system quality metric evaluates whether the system policy provides the user with the required information efficiently, and the user quality metric evaluates whether the user policy expresses the constraints completely in an organized way. Note that we do not evaluate the quality of language generation here. Table 5 shows the results of human preference by majority voting.

vs.       System Q          User Q            Success
          W    D    L       W    D    L       W    D    L
SL/SL     55   22   23      61   25   14      68   26   6
RL/RL     49   23   28      52   28   20      70   19   11
IterDPL   50   27   23      56   30   14      64   24   12

Table 5: Human preference on dialog session pairs that MADPL wins (W), draws with (D) or loses to (L) baselines with regard to quality (Q) and success by majority voting.
We can observe that the high win rate of MADPL on the task success is consistent with the results of automatic evaluation, and MADPL outperforms three baselines significantly in all aspects (sign test, p-value < 0.01) except for the system quality against RL/RL policies. The proportion of the pairwise annotations in which at least 3 of 5 annotators assign the same label to a task is 78.7%/77.3%/83.3% for system quality/user quality/task success, respectively. This indicates that annotators have moderate agreements. The human judgements align well with the results of automatic evaluation, which also indicates the reliability of the metrics used in task-oriented dialog. 7 Conclusion We present a multi-agent dialog policy algorithm, MADPL, that trains the user policy and the system policy simultaneously. It uses the actor-critic framework to facilitate pretraining and bootstrap RL training in multi-domain task-oriented dialog. We also introduce role-aware reward decomposition to integrate the task knowledge into the algorithm. MADPL enables the developers to set up a dialog system rapidly from scratch. It only requires the annotation of dialog acts in the corpus for pretraining and does not need to build a user simulator explicitly beforehand. Extensive experiments1 demonstrate the effectiveness, reasonableness and scalability of MADPL. As future work, we will apply MADPL in the more complex dialogs and verify the role-aware reward decomposition in other dialog scenarios. Acknowledgement This work was jointly supported by the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096), and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank THUNUS NExT Joint-Lab for the support. The code is available at https://github.com/ truthless11/MADPL. 1We provide implementation details and case studies in appendix. 634 References Andrew G Barto, Richard S Sutton, and Charles W Anderson. 1983. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE transactions on systems, man, and cybernetics, 13(5):834–846. Daniel S Bernstein, Robert Givan, Neil Immerman, and Shlomo Zilberstein. 2002. The complexity of decentralized control of markov decision processes. Mathematics of operations research, 27(4):819–840. Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I˜nigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gaˇsi´c. 2018. Multiwoz: A largescale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling. In 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016–5026. Abhishek Das, Satwik Kottur, Jos´e MF Moura, Stefan Lee, and Dhruv Batra. 2017. Learning cooperative visual dialog agents with deep reinforcement learning. In 2017 IEEE International Conference on Computer Vision, pages 2951–2960. Layla El Asri, Jing He, and Kaheer Suleman. 2016. A sequence-to-sequence model for user simulation in spoken dialogue systems. 17th Annual Conference of the International Speech Communication Association, pages 1151–1155. Mehdi Fatemi, Layla El Asri, Hannes Schulz, Jing He, and Kaheer Suleman. 2016. Policy networks with two-stage training for dialogue systems. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 101–110. Jun Feng, Heng Li, Minlie Huang, Shichen Liu, Wenwu Ou, Zhirong Wang, and Xiaoyan Zhu. 2018. Learning to collaborate: Multi-scenario ranking via multi-agent reinforcement learning. 
In 27th International Conference on World Wide Web, pages 1939– 1948. Jakob N Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli, and Shimon Whiteson. 2018. Counterfactual multi-agent policy gradients. In 32nd AAAI Conference on Artificial Intelligence, pages 2974–2982. Kallirroi Georgila, Claire Nelson, and David Traum. 2014. Single-agent vs. multi-agent techniques for concurrent reinforcement learning of negotiation dialogue policies. In 52nd Annual Meeting of the Association for Computational Linguistics, pages 500– 510. Izzeddin G¨ur, Dilek Hakkani-T¨ur, Gokhan T¨ur, and Pararth Shah. 2018. User modeling for task oriented dialogues. In 2018 IEEE Spoken Language Technology Workshop, pages 900–906. He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. In 2018 Conference on Empirical Methods in Natural Language Processing, pages 2333–2343. Simon Keizer, Milica Gaˇsi´c, Filip Jurˇc´ıˇcek, Franc¸ois Mairesse, Blaise Thomson, Kai Yu, and Steve Young. 2010. Parameter estimation for agendabased user simulation. In 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 116–123. Satwik Kottur, Jos´e Moura, Stefan Lee, and Dhruv Batra. 2017. Natural language does not emerge naturally in multi-agent dialog. In 2017 Conference on Empirical Methods in Natural Language Processing, pages 2962–2967. Florian Kreyssig, I˜nigo Casanueva, Paweł Budzianowski, and Milica Gasic. 2018. Neural user simulation for corpus-based policy optimisation of spoken dialogue systems. In 19th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 60–69. Jason Lee, Kyunghyun Cho, and Douwe Kiela. 2019a. Countering language drift via visual grounding. In 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, pages 4376–4386. Sungjin Lee, Qi Zhu, Ryuichi Takanobu, Zheng Zhang, Yaoqin Zhang, Xiang Li, Jinchao Li, Baolin Peng, Xiujun Li, Minlie Huang, and Jianfeng Gao. 2019b. Convlab: Multi-domain end-to-end dialog system platform. In 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 64–69. Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-toend learning of negotiation dialogues. In 2017 Conference on Empirical Methods in Natural Language Processing, pages 2443–2453. Jinchao Li, Baolin Peng, Sungjin Lee, Jianfeng Gao, Ryuichi Takanobu, Qi Zhu, Minlie Huang, Hannes Schulz, Adam Atkinson, and Mahmoud Adada. 2020. Results of the multi-domain task-completion dialog challenge. In 34th AAAI Conference on Artificial Intelligence, Eighth Dialog System Technology Challenge Workshop. Bing Liu and Ian Lane. 2017. Iterative policy learning in end-to-end trainable task-oriented neural dialog models. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop, pages 482–489. Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. 2017. Multi-agent actor-critic for mixed cooperative-competitive environments. In 31st Annual Conference on Neural Information Processing Systems, pages 6379–6390. 635 Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. 2015. 
Human-level control through deep reinforcement learning. Nature, 518(7540):529–533. Alexandros Papangelis, Yi-Chia Wang, Piero Molino, and Gokhan Tur. 2019. Collaborative multi-agent dialogue model training via reinforcement learning. In 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 92–102. Baolin Peng, Xiujun Li, Lihong Li, Jianfeng Gao, Asli Celikyilmaz, Sungjin Lee, and Kam-Fai Wong. 2017. Composite task-completion dialogue policy learning via hierarchical deep reinforcement learning. In 2017 Conference on Empirical Methods in Natural Language Processing, pages 2231–2240. Tabish Rashid, Mikayel Samvelyan, Christian Schroeder Witt, Gregory Farquhar, Jakob Foerster, and Shimon Whiteson. 2018. Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning. In 35th International Conference on Machine Learning, pages 4292–4301. Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a pomdp dialogue system. In 2007 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 149–152. Pararth Shah, Dilek Hakkani-T¨ur, Bing Liu, and Gokhan T¨ur. 2018. Bootstrapping a neural conversational agent with dialogue self-play, crowdsourcing and on-line reinforcement learning. In 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 41–51. Weiyan Shi, Kun Qian, Xuewei Wang, and Zhou Yu. 2019. How to build user simulators to train rl-based dialog systems. In 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, pages 1990–2000. David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George Van Den Driessche, Thore Graepel, and Demis Hassabis. 2017. Mastering the game of go without human knowledge. Nature, 550(7676):354–359. Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational linguistics, 26(3):339–373. Pei-Hao Su, Milica Gaˇsi´c, Nikola Mrkˇsi´c, Lina M Rojas Barahona, Stefan Ultes, David Vandyke, TsungHsien Wen, and Steve Young. 2016. On-line active reward learning for policy optimisation in spoken dialogue systems. In 54th Annual Meeting of the Association for Computational Linguistics, pages 2431– 2441. Shang-Yu Su, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Yun-Nung Chen. 2018. Discriminative deep dyna-q: Robust planning for dialogue policy learning. In 2018 Conference on Empirical Methods in Natural Language Processing, pages 3813–3823. Ryuichi Takanobu, Hanlin Zhu, and Minlie Huang. 2019. Guided dialog policy learning: Reward estimation for multi-domain task-oriented dialog. In 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, pages 100–110. Harm Van Seijen, Mehdi Fatemi, Joshua Romoff, Romain Laroche, Tavian Barnes, and Jeffrey Tsang. 2017. Hybrid reward architecture for reinforcement learning. In 31st Annual Conference on Neural Information Processing Systems, pages 5392–5402. 
Jason D Williams, Antoine Raux, and Matthew Henderson. 2016. The dialog state tracking challenge series: A review. Dialogue & Discourse, 7(3):4–33. Zhirui Zhang, Xiujun Li, Jianfeng Gao, and Enhong Chen. 2019. Budgeted policy learning for taskoriented dialogue systems. In 57th Annual Meeting of the Association for Computational Linguistics, pages 3742–3751. Tiancheng Zhao, Kaige Xie, and Maxine Eskenazi. 2019. Rethinking action spaces for reinforcement learning in end-to-end dialog agents with latent variable models. In 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1208–1218. 636 A Implementation Details Both the system policy π and the user policy µ are implemented with two hidden layer MLPs. The action space of system policy and user policy is 172 and 80 respectively. For Hybrid Value Network V , all neural network units f(·) are two hidden layer MLPs. The activation function is all Relu for MLPs. We use RMSprop as the optimization algorithm. The batch size is set to 32. The weighted pretraining factor β is 2.5, 4 for the system policy and user policy respectively. The learning rate for two polices is 1e-3 when pretraining. As for RL training, the learning rate is 1e-4, 5e-5 for the system policy and the user policy respectively, and 3e-5 for Hybrid Value Network. The discount factor γ is 0.99, and the target network is updated every C= 400 training iterations. In terms of reward design, the empty action penalty is set to -5, and penalties of other types are set to -1. The sub-goal completion reward is set to 5. The task success and the user goal reward are set to 20 if triggered, otherwise they are set to -5. B Case Study To illustrate the superiority of our model, we show a pair of sample dialog sessions between the user and the system, trained with RL/RL and MADPL, respectively. As shown in Table 6, the user goal contains two domains (i.e., train and restaurant). The user agent is supposed to inform the constraints displayed in Info, request information displayed in Reqt, and express the intent to book the train and restaurant with the information in Book. The system agent is required to answer the request, as well as book the train and the restaurant according to the user’s demands. Table 7 shows the sampled dialog sessions. For RL/RL, the user sometimes talks about both the train and the restaurant simultaneously, making it difficult for the system to respond (highlighted in red color). Moreover, the system agent in RL/RL sometimes fails to answer the requested information or book the restaurant as the user demands (highlighted in blue color). As a result, the dialog in RL/RL fails as the user does not request the arrival time of the train, and the system does not book the restaurant for the user. In comparison, both the user and the system in MADPL fulfill their own task, and the whole dialog is successful. Table 8 shows the same sample dialog sessions with the original dialog acts. 637 Train Restaurant Info Reqt Book Info Reqt Book destination=peterborough duration people= 3 food=european phone time=19:30 day=thursday arriveBy pricerange=expensive postcode people=5 departure=cambridge address day=tuesday leaveAt=19:45 Table 6: User goal used in the case study. RL/RL MADPL U: Hi, I’d like to take a train from cambridge to peterborough on thursday. I want to leave at about 19:45. U: Hi, I’d like to take a train from cambridge to peterborough on thursday. I want to leave at about 19:45. 
S: You may consider TR7324, which leaves at 21:40. Shall I book tickets for you? S: You may consider TR7324, which leaves at 21:40, and arrives at cambridge at 24:23. Shall I book tickets for you? U: Sounds good! Please book tickets to peterborough for 3 people. U: Sounds good! Please book tickets for 3 people. S: I’ve booked your train tickets. The price is 10.10 pounds and your reference number is train-00002321. Is there anything else I can help you with today? S: I ’ve booked your train tickets. The price is 10.10 pounds and your reference number is train-00002321. Is there anything else I can help you with today? U: Could you please tell me the train ID and travel time? I also want to book an european restaurant with expensive price. U: I also want to book an european restaurant with expensive price on Tuesday. S: There are 4 choices for expensive european restaurant. What area of town would you prefer ? S: There are 4 choices found. City Stop Restaurant is a good choice. U: I don’t care about the location. Could you please tell me the address, phone number and post of the restaurant? U: I don’t care its name, but make sure it’s an expensive restaurant. Can you help me with a reservation for 5 people at 19:30 this coming Tuesday? S: Sure, City Stop Restaurant is located at Cambridge City Football Club Milton Road Chesterton and the post is cb41uy. The phone number is 01223363270. Is there anything else I can help you with today? S: Booking was successful. Reference number is: restaurant-00000083. U: Can you help me with a reservation for 5 people at 19:30 this coming Tuesday? U: Although I don’t care its name, I’d like to know its phone number and post. S: Is there anything else I can help you with? S: Sure, the phone number is 01223368786 and the post is cb23rh. Anything else? U: Could you please tell me the travel time of the train I just booked? U: What’s the address of the restaurant? S: The travel time is 60 minutes. Anything else I can look up for you? S: The restaurant is located at Cambridge Lodge Hotel 139 Huntingdon Road City Centre. Is there anything else I can help you with? U: Thanks! When will the train arrive? U: Could you please tell me the travel time of the train I just booked? Thanks very much! S: You’re welcome! Anything else? Bye! S: The travel time is 105 minutes. Anything else I can look up for you? U: Thank you very much! U: Thank you. That ’s all I needed . S: You are very welcome. Goodbye. S: You are very welcome. Goodbye. Failure: User does not request the arrive time of the train, and the system does not book the restaurant for the user. Success Table 7: A sample dialog session comparison between RL/RL and MADPL in natural language. 
638 RL/RL MADPL U: {’train-inform-dest’: ’peterborough’, ’train-informday’: ’thursday’, ’train-inform-depart’: ’cambridge’, ’train-inform-leave’: ’19:45’} U: {’train-inform-dest’: ’peterborough’, ’train-informday’: ’thursday’, ’train-inform-depart’: ’cambridge’, ’train-inform-leave’: ’19:45’} S: {’train-inform-leave’: ’21:40’, ’train-inform-id’: ’TR7324’, ’train-offerbook-none’: ’none’} S: {’train-inform-leave’: ’21:40’, ’train-inform-id’: ’TR7324’, ’train-inform-arrive’: ’24:23’, ’trainofferbook-none’: ’none’, ’train-inform-dest’: ’cambridge’} U: {’train-inform-dest’: ’peterborough’, ’train-informpeople’: ’3’} U: {’train-inform-people’: ’3’} S: {’general-reqmore-none’: ’none’, ’trainofferbooked-ref’: ’train-00002321’, ’trainofferbooked-ticket’: ’10.10 pounds’} S: {’general-reqmore-none’: ’none’, ’trainofferbooked-ref’: ’train-00002321’, ’trainofferbooked-ticket’: ’10.10 pounds’} U: {’restaurant-inform-food’: ’european’, ’restaurantinform-price’: ’expensive’, ’train-request-time’: ’?’, ’train-request-id’: ’?’} U: {’restaurant-inform-food’: ’european’, ’restaurantinform-price’: ’expensive’, ’restaurant-inform-day’: ’tuesday’} S: {’restaurant-inform-choice’: ’4’, ’restaurant-informfood’: ’european’, ’restaurant-inform-price’: ’expensive’, ’restaurant-request-area-?’: ’?’} S: {’restaurant-inform-choice’: ’4’, ’restaurantrecommend-name’: ’city stop restaurant’} U: {’restaurant-inform-area’: ’dont care’, ’restaurantrequest-addr’: ’?’, ’restaurant-request-phone’: ’?’, ’restaurant-request-post’: ’?’} U: {’restaurant-inform-price’: ’expensive’, ’restaurantinform-time’: ’19:30’, ’restaurant-inform-day’: ’tuesday’, ’restaurant-inform-people’: ’5’, ’restaurant-inform-name’: ’dont care’} S: {’general-reqmore-none’: ’none’, ’restaurantinform-name’: ’city stop restaurant’, ’restaurantinform-addr’: ’Cambridge City Football Club Milton Road Chesterton’, ’restaurant-recommendname’: ’city stop restaurant’, ’restaurant-informphone’: ’01223363270’, ’restaurant-inform-post’: ’cb41uy’} S: {’general-reqmore-none’: ’none’, ’booking-bookref’: ’restaurant-00000083’} U: {’restaurant-inform-time’: ’19:30’, ’restaurantinform-day’: ’tuesday’, ’restaurant-inform-people’: ’5’} U: {’restaurant-inform-name’: ’dont care’, ’restaurantrequest-phone’: ’?’, ’restaurant-request-post’: ’?’} S: {’general-reqmore-none’: ’none’} S: {’general-reqmore-none’: ’none’, ’restaurantinform-phone’: ’01223368786’, ’restaurant-informpost’: ’cb23rh’} U: {’train-request-time’: ’?’} U: {’restaurant-request-addr’: ’?’} S: {’general-reqmore-none’: ’none’, ’train-informtime’: ’60 minutes’} S: {’general-reqmore-none’: ’none’, ’restaurantinform-addr’: ’Cambridge Lodge Hotel 139 Huntingdon Road City Centre’} U: {’general-thank-none’: ’none’, ’train-requestarrive’: ’?’} U: {’general-thank-none’: ’none’, ’train-request-time’: ’?’} S: {’general-reqmore-none’: ’none’, ’general-byenone’: ’none’, ’general-welcome-none’: ’none’} S: {’general-reqmore-none’: ’none’, ’train-informtime’: ’105 minutes’} U: {’general-thank-none’: ’none’} U: {’general-thank-none’: ’none’} S: {’general-bye-none’: ’none’, ’general-welcomenone’: ’none’} S: {’general-bye-none’: ’none’, ’general-welcomenone’: ’none’} Failure: User does not request the arrive time of the train, and the system does not book the restaurant for the user. Success Table 8: A sample dialog session comparison between RL/RL and MADPL in dialog acts.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6600–6610 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6600 Evaluating and Enhancing the Robustness of Neural Network-based Dependency Parsing Models with Adversarial Examples Xiaoqing Zheng1,2∗, Jiehang Zeng1,2∗, Yi Zhou1,2∗, Cho-Jui Hsieh3, Minhao Cheng3, Xuanjing Huang1,2 1School of Computer Science, Fudan University, Shanghai, China 2Shanghai Key Laboratory of Intelligent Information Processing 3Department of Computer Science, University of California, Los Angeles, USA {zhengxq, jhzeng18, yizhou17}@fudan.edu.cn {chohsieh, mhcheng}@cs.ucla.edu, [email protected] Abstract Despite achieving prominent performance on many important tasks, it has been reported that neural networks are vulnerable to adversarial examples. Previously studies along this line mainly focused on semantic tasks such as sentiment analysis, question answering and reading comprehension. In this study, we show that adversarial examples also exist in dependency parsing: we propose two approaches to study where and how parsers make mistakes by searching over perturbations to existing texts at sentence and phrase levels, and design algorithms to construct such examples in both of the black-box and white-box settings. Our experiments with one of state-of-the-art parsers on the English Penn Treebank (PTB) show that up to 77% of input examples admit adversarial perturbations, and we also show that the robustness of parsing models can be improved by crafting high-quality adversaries and including them in the training stage, while suffering little to no performance drop on the clean input data. 1 Introduction Deep neural network-based machine learning (ML) models are powerful but vulnerable to adversarial examples. Adversarial examples also yield broader insights into the targeted models by exposing them to such maliciously crafted examples. The introduction of the adversarial example and training ushered in a new era to understand and improve the ML models, and has received significant attention recently (Szegedy et al., 2013; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016b; Carlini and Wagner, 2017; Yuan et al., 2019; Eykholt et al., 2018; Xu et al., 2019). Even though generating adversarial examples for texts has proven to be a more challenging task ∗These authors contributed equally to this work. The DT link NN between IN the DT futures NNS and CC stock NN markets NNS ripped VBD apart RB . . det advmod prep det cc nn conj pobj punct nsubj The DT link NN between IN the DT futures NNS and CC exchange NN markets NNS ripped VBD apart RB . . det advmod prep cc conj punct nn det pobj nsubj Figure 1: Sentence-level attack: An adversarial example (bottom) for the output (top) of a deep neural dependency parser (Dozat and Manning, 2017). Replacing a word “stock” with an adversarially-chosen word “exchange” in the sentence causes the parser to make four mistakes (blue, dashed) in arc prediction. The adversarial example preserves the original syntactic structures, and the substitute word is assigned to the same part of speech (POS) as the replaced one. The assigned POS tags (blue) are listed below the words. 
than for images and audios due to their discrete nature, a few methods have been proposed to generate adversarial text examples and reveal the vulnerability of deep neural networks in natural language processing (NLP) tasks including reading comprehension (Jia and Liang, 2017), text classification (Samanta and Mehta, 2017; Wong, 2017; Liang et al., 2018; Alzantot et al., 2018), machine translation (Zhao et al., 2018; Ebrahimi et al., 2018; Cheng et al., 2018) and dialogue systems (Cheng et al., 2019). These recent methods attack text examples mainly by replacing, scrambling, and erasing characters or words or other language units under certain semantics-preserving constraints. Although adversarial examples have been studied recently for NLP tasks, previous work almost exclusively focused on semantic tasks, where the attacks aim to alter the semantic prediction of ML models (e.g., sentiment prediction or question answering) without changing the meaning of original texts. To the best of our knowledge, adversarial 6601 examples to syntactic tasks, such as dependency parsing, have not been studied in the literature. Motivated by this, we take the neural network-based dependency parsing algorithms as targeted models and aim to answer the following questions: Can we construct syntactic adversarial examples to fool a dependency parser without changing the original syntactic structure? And can we make dependency parsers robust with respect to these attacks? To answer these questions, we propose two approaches to study where and how parsers make mistakes by searching over perturbations to existing texts at sentence and phrase (corresponding to subtrees in a parse tree) levels. For the sentencelevel attack, we modify an input sentence to fool a dependency parser while such modification should be syntactically imperceptible to humans (see Figure 1). Any new error (excluding the arcs directly connected to the modified parts) made by the parser is accounted as a successful attack. For the phrase-level (or subtree-level) attack, we choose two phrases from a sentence, which are separated by at least k words (say k ≥0), and modify one phrase to cause the parser’s prediction errors in another target phrase (see Figure 2). Unlike the sentence-level attack, any error occurred outside the target subtree is not considered as a successful attacking trial. It helps us to investigate whether an error in one part of a parse tree may exert longrange influence, and cause cascading errors (Ng and Curran, 2015). We study the sentence-level and subtree-level attacks both in white-box and black-box settings. In the former setting, an attacker can access to the model’s architecture and parameters while it is not allowed in the latter one. Our contributions are summarized as follows: (1) we explore the feasibility of generating syntactic adversarial sentence examples to cause a dependency parser to make mistakes without altering the original syntactic structures; (2) we propose two approaches to construct the syntactic adversarial examples by searching over perturbations to existing texts at sentence and phrase levels in both the blackbox and white-box settings; (3) our experiments with a close to state-of-the-art parser on the English Penn Treebank show that up to 77% of input examples admit adversarial perturbations, and moreover that robustness and generalization of parsing models can be improved by adversarial training with the proposed attacks. The source code is available at (https://github.com/zjiehang/DPAttack). 
buy VBP , , traders NNS or CC sell VBP in IN program NN a DT stock-index NN arbitrage NN sell NN baskets NNS big JJ of IN stocks NNS and CC offset VBP trade NN the DT in IN futures NNS . . lock VB to TO in IN difference NN a DT price NN Subtree to be modified Target subtree A example sentence: In a stock-index arbitrage sell program, traders buy or sell big baskets of stocks and offset the trade in futures to lock in a price difference. Figure 2: Phrase-level attack: two separate subtrees in a parse tree are selected, and one of them (left) is deliberately modified to cause a parser to make incorrect arc prediction for another target subtree (right). For example, we can make a neural dependency parser (Dozat and Manning, 2017) to attach the word “difference” in the target subtree to its sibling “in” instead of the correct head “lock” (the subtree’s root) by maliciously manipulating the selected leftmost subtree only. 2 Related Work Generating adversarial examples – inputs intentionally crafted to fool a model – has become an important means of exploring model vulnerabilities. Furthermore, adding adversarial examples in the training stage, also known as adversarial training, has become one of the most promising ways to improve model’s robustness. Although there is limited literature available for NLP adversarial examples, some studies have been conducted on NLP tasks such as reading comprehension (Jia and Liang, 2017), text classification (Samanta and Mehta, 2017; Wong, 2017; Liang et al., 2018; Alzantot et al., 2018), machine translation (Zhao et al., 2018; Ebrahimi et al., 2018; Cheng et al., 2018), and dialogue systems (Cheng et al., 2019). Depending on the degree of access to the target model, adversarial examples can be constructed two different settings: white-box and black-box settings (Xu et al., 2019; Wang et al., 2019). In the white-box setting, an adversary can access the model’s architecture, parameters and input feature representations while not in the black-box one. The white-box attacks normally yield a higher success rate because the knowledge of target models can be used to guide the generation of adversarial examples. However, the black-box attacks do not require access to target models, making them more practicable for many real-world attacks. Such attacks also can be divided into targeted and non-targeted ones depending on the purpose of adversary. Our phrase-level attack can be viewed as a targeted at6602 tack towards a specific subtree while the sentencelevel attack can be taken as a non-targeted one. For text data, input sentences can be manipulated at character (Ebrahimi et al., 2018), sememe (the minimum semantic units) (Zang et al., 2019), or word (Samanta and Mehta, 2017; Alzantot et al., 2018) levels by replacement, alteration (e.g. deliberately introducing typos or misspellings), swap, insertion, erasure, or directly making small perturbations to their feature embeddings. Generally, we would like to ensure that the crafted adversarial examples are sufficiently similar to their original ones, and these modifications should be made within semantics-preserving constraints. Such semantic similarity constraints are usually defined based on Cosine similarity (Wong, 2017; Barham and Feizi, 2019; Jin et al., 2019; Ribeiro et al., 2018) or edit distance (Gao et al., 2018). Text adversarial example generation usually involves two steps: determine an important position (or token) to change; modify it slightly to maximize the model’s prediction error. 
This two-step can be repeated iteratively until the model’s prediction changes or certain stopping criteria are reached. Many methods have been proposed to determine the important positions by random selection (Alzantot et al., 2018), trial-and-error testing at each possible point (Kuleshov et al., 2018), analyzing the effects on the model of masking various parts of a input text (Samanta and Mehta, 2017; Gao et al., 2018; Jin et al., 2019; Yang et al., 2018), comparing their attention scores (Hsieh et al., 2019), or gradient-guided optimization methods (Ebrahimi et al., 2018; Lei et al., 2019; Wallace et al., 2019; Barham and Feizi, 2019). After the important positions are identified, the most popular way to alter text examples is to replace the characters or words at selected positions with similar substitutes. Such substitutes can be chosen from nearest neighbours in an embedding space (Alzantot et al., 2018; Kuleshov et al., 2018; Jin et al., 2019; Barham and Feizi, 2019), synonyms in a prepared dictionary (Samanta and Mehta, 2017; Hsieh et al., 2019), visually similar alternatives like typos (Samanta and Mehta, 2017; Ebrahimi et al., 2018; Liang et al., 2018) or Internet slang and trademark logos (Eger et al., 2019), paraphrases (Lei et al., 2019) or even randomly selected ones (Gao et al., 2018). Given an input instance, Zhao et al. (2018) proposed to search for adversaries in the neighborhood of its corresponding representation in latent space by sampling within a range that is recursively tightened. Jia and Liang (2017) tried to insert few distraction sentences generated by a simple set of rules into text examples to mislead a reading comprehension system. 3 Preliminary Dependency parsing is the task of constructing a parse tree of a sentence that represents its syntactic structure and defines the relationships between “head” words and dependent ones, which modify their heads (see the arcs in Figure 1). In this section, we first describe a graph-based dependency parsing method, and then formally present the adversarial attack problem of dependency parsing. 3.1 Dependency Parsing Graph-based parsing models learn the parameters to score correct dependency subgraphs over incorrect ones, typically by factoring the graphs directed edges (or arcs), and performs parsing by searching the highest-scoring graph for a given sentence. Given a sentence x, we denote the set of all valid parse trees that can be constructed from x as Y(x). Assume that there exists a graph scoring function s, the dependency parsing problem can be formulated as finding the highest scoring directed spanning tree for the sentence x. y∗(x) = argmax ˆy∈Y(x) s(x, ˆy; θ) (1) where y∗(x) is the parse tree with the highest score, and θ are all the parameters used to calculate the scores. Given a sentence x[1:n] that is a sequence of n words xi, 1 ≤i ≤n, the score of a graph is usually factorized into the sum of its arc scores to make the search tractable (McDonald et al., 2005). s(x, ˆy; θ) = X (xh,xm)∈A(ˆy) s(xh, xm; θ) (2) where A(ˆy) represents a set of directed edges in the parse tree ˆy. The score of an arc (xh, xm) represents the likelihood of creating a dependency from head xh to modifier xm in a dependency tree. 3.2 Problem Definition A neural network can be considered as a mapping f : X →Y from an input x ∈X to a output y ∈Y with parameters θ. For classification problems, y is a label which lies in some finite set of categories. For the dependency parsing, y is one of valid parses that can be built from x. 
The model f maps x to y∗ with the highest score, as defined in Equation (1). 6603 Given the original input x, adversarial examples are crafted to cause an ML model to misbehave. Following the common definition in previous papers (e.g., Kuleshov et al., (2018)), for a model f, we say x′ is a good adversarial example of x for untargeted attack if f(x′) ̸= y, c(x, x′) ≤ϵ (3) where y is the truth output for x. For targeted attack the goal is to turn f(x′) into a particular targeted class, denoted by y′, under the same constraint in (3). The constraint function c : X × X →Rg + and a vector of bounds ϵ ∈Rg(g ≥1) reflect the notion of the “imperceptibility” of perturbation to ensure that the true label of x′ should be the same as x. In the context of image classification, popular choices of such constraint include ℓ0, ℓ2 and ℓ∞distances. For natural language tasks, x and x′ are sentences composed with discrete words, and previous methods often define c to measure the semantic similarity between them, and thus x, x′ should have the same semantic meaning while being predicted differently using model f. In this paper, we consider the syntactic similarity and propose various ways to define such constraint for the dependency parsing task (see Section 4). Generating adversarial examples can be formulated as an optimization problem of maximizing the probability of f(x′) ̸= y by choosing x′ for x subject to c(x, x′) ≤ϵ. Algorithms for solving this problem include fast gradient sign method (Goodfellow et al., 2015), iterative methods based on constrained gradient descent (Papernot et al., 2016a), GAN-based strategy (Wong, 2017), genetic algorithms (Alzantot et al., 2018), and submodular set function maximization (Lei et al., 2019). 4 Method Adversarial examples are required to maintain the original functionality of the input. In the adversarial NLP literature, previous studies often expect the adversarial examples to retain the same or similar semantic meaning as the original one (Samanta and Mehta, 2017; Wong, 2017; Alzantot et al., 2018; Zhao et al., 2018; Zang et al., 2019). However, in this paper we focus on the dependency parsing task, which focuses on predicting the syntactic structure of input sentences. Therefore, to expose regions of the input space where the dependency parsers perform poorly, we would like the modified examples x′ to preserve the same syntactic structure as the original x, but slightly relax the constraint on their similarity in semantic properties. A robust parser should perform consistently well on the sentences that share the same syntactic properties, while differ in their meaning. For example, substituting the word “black” for “white”, or “dog” for “cat” are acceptable replacements because they are grammatically imperceptible to humans. 4.1 Adversarial Examples for Parsing We craft the adversarial examples mainly by replacing few words in an input sentence with carefully selected ones. To preserve the same syntactic structure as the original sentence x, we impose the following three constraints that should be satisfied by the word replacement when generating the adversarial examples x′: (i) The substitute word x′ i should fit in well with the context, and can maintain both the semantic and syntactic coherence. (ii) For any word xi in an original example, the word x′ i to replace xi must have the same partof-speech (POS) as xi. (iii) Pronouns, articles, conjunctions, numerals, interjections, interrogative determiners, and punctuations are not allowed to be replaced1. 
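A minimal sketch of how constraints (ii) and (iii) above might be enforced when filtering candidate substitutes is given below. The excluded-tag list and the use of NLTK's part-of-speech tagger are assumptions for illustration (the paper uses the Stanford tagger); this is not the authors' implementation.

from nltk import pos_tag  # requires nltk.download('averaged_perceptron_tagger')

# Penn-Treebank-style tags we never replace (constraint iii): pronouns, articles/determiners,
# conjunctions, numerals, interjections, wh-determiners and punctuation (assumed tag list).
EXCLUDED_TAGS = {"PRP", "PRP$", "DT", "CC", "CD", "UH", "WDT", "WP", "WP$",
                 ".", ",", ":", "``", "''", "-LRB-", "-RRB-"}

def tag_sequence(words):
    """POS tags for a tokenized sentence, e.g. tags = tag_sequence(words)."""
    return [tag for _, tag in pos_tag(words)]

def is_attackable(position, tags):
    """Constraint (iii): some word classes are never modified."""
    return tags[position] not in EXCLUDED_TAGS

def valid_substitute(original_word, candidate, position, words, tags):
    """Constraint (ii): the candidate must keep the original word's POS when re-tagged in
    context; constraint (i), contextual fit, comes from the masked-LM proposals themselves."""
    if candidate.lower() == original_word.lower():
        return False
    new_tags = tag_sequence(words[:position] + [candidate] + words[position + 1:])
    return new_tags[position] == tags[position]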
To select a substitute word that agrees well with the context of a sentence, we use BERT (Devlin et al., 2019) to generate a set of candidate words that are suitable to replace the original word, since its bidirectional language model is capable of capturing the wider context of the entire sentence2. Words that are assigned the same POS generally have similar grammatical properties and display similar syntactic behavior. To enforce the second constraint, we require that the substitute x′_i be assigned the same part of speech as x_i by a POS tagger, as in (Samanta and Mehta, 2017; Ebrahimi et al., 2018). We filter out the words listed in the third constraint.

1 We exclude those words from being replaced because either there is a very limited number of substitutes available, or such replacements easily lead to syntactic inconsistency.
2 We also tried to replace words with their nearest neighbors in the vector space of pre-trained word embeddings such as GloVe (Pennington et al., 2014). However, our preliminary experiments show that these nearest neighbors cannot fit well with the context in many cases, since the neighboring words are retrieved without taking the specific context into account.

We adopt the following two-step procedure for generating text adversarial examples: choose weak spots (or positions) to change, and then modify them to maximize the model's error. In the black-box setting, we first identify the weak spots of an input sentence with a greedy search strategy by replacing each word, one at a time, with a special "unknown" symbol (<unk>), and examining the changes in unlabeled attachment score (UAS), as in (Yang et al., 2018; Gao et al., 2018; Hsieh et al., 2019). For each identified weak spot, we replace it with a word from the candidate set proposed by BERT to form an attack. We select the substitute word that causes the greatest decrease in UAS while satisfying the aforementioned constraints to construct the adversarial example. This process is repeated until all candidate words are exhausted and every weak spot is tested (see Figure 3).

In the white-box setting, full access to the target model's parameters and features enables us to launch a "surgical" attack by crafting more accurate adversarial examples. We propose a scoring function to determine which parts of an input sentence x of n words x_i (1 ≤ i ≤ n) are more vulnerable to adversarial attacks, as follows:
$$F(x, \theta) = \sum_{m=1}^{n} \max\Big[s(x_h, x_m; \theta) - \max_{j \neq h} s(x_j, x_m; \theta),\, -\varepsilon\Big],$$
$$S(x_i, \theta) = \Big\lVert \frac{\partial F(x, \theta)}{\partial e_{x_i}} \Big\rVert_2, \quad (4)$$
where $\theta$ are all the parameters of the target dependency parser, $e_{x_i}$ is the embedding of word $x_i$, and $\varepsilon \geq 0$ denotes a confidence margin. A larger $\varepsilon$ will lead to a more confident output and a higher success rate, but at the cost of more iterations. The function F(x, θ) sums up all the differences between the score of each ground-truth arc (x_h, x_m) and that of the incorrect but highest-scoring arc with the same dependent x_m. Generally speaking, the greater the value of this function, the harder it is to find adversarial examples for the input x, because there is a larger margin between the true parse tree and any incorrect one. Minimizing this function maximizes the probability of causing the parser to misbehave. We determine the importance of words by their values of S(x_i, θ), namely the norm of the partial derivative of the function F(x, θ) with respect to the word x_i. The key idea is that we use the magnitude of the gradient to decide which words to attack.
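A PyTorch-style sketch of the margin function F(x, θ) and the gradient-norm saliency S(x_i, θ) of Eq. (4) follows. The `score_arcs` hook standing in for the biaffine parser's arc-score matrix and the embedding layout are assumptions for illustration, not the DPAttack code.

import torch

def margin_F(arc_scores, gold_heads, eps=1.0):
    """arc_scores[h, m]: score of head h for modifier m (position 0 = artificial root).
    F sums, over modifiers, the clipped margin between the gold arc and the best wrong head."""
    n = arc_scores.size(1)
    total = arc_scores.new_zeros(())
    for m in range(1, n):                      # skip the artificial root position
        gold = arc_scores[gold_heads[m], m]
        wrong = arc_scores[:, m].clone()
        wrong[gold_heads[m]] = float("-inf")   # best-scoring incorrect head for this modifier
        total = total + torch.clamp(gold - wrong.max(), min=-eps)
    return total

def word_saliency(parser, word_embeddings, gold_heads, eps=1.0):
    """S(x_i) = || dF/de_{x_i} ||_2 : larger norms mark words that are more worth attacking."""
    emb = word_embeddings.clone().detach().requires_grad_(True)
    arc_scores = parser.score_arcs(emb)        # hypothetical hook into the target parser
    f_val = margin_F(arc_scores, gold_heads, eps)
    f_val.backward()
    return emb.grad.norm(dim=-1)               # one saliency value per word position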
Assuming we have a set of candidate words $C_{x_i}$, we select the optimal one $x^*_i$ by:
$$x^*_i = \operatorname*{argmin}_{w \in C_{x_i}} \Big\lVert e_w - \Big(e_{x_i} - \frac{\alpha}{S(x_i, \theta)} \frac{\partial F(x, \theta)}{\partial e_{x_i}}\Big) \Big\rVert_2, \quad (5)$$
where the coefficient $\alpha$ governs the relative importance of the normalized gradient term. We want the selected word to be as close as possible to the replaced one $x_i$ in the embedding space according to the Euclidean distance, where the embedding of $x_i$ is updated in the opposite direction of the gradient at rate $\alpha$. Such a replacement will lead to a decrease in the value of the function F(x, θ). Our algorithm for generating adversarial examples for dependency parsing in the white-box setting is shown in Figure 4.

Inputs: x[1:n]: an input sentence of n words x_i, 1 ≤ i ≤ n; f: a target parser; γ: the maximum percentage of words that can be modified; ψ: the size of the set of candidate words.
Output: an adversarial example x′ of x.
Algorithm:
1: κ = γ·n (the maximum number of words to be modified)
2: for each word x_i except those listed in constraint (iii)
3:   x̂_i = replace x_i with a special symbol "<unk>" in x;
4:   calculate the unlabeled attachment score of f(x̂_i).
5: sort the x̂_i by their UAS, and append the top-κ positions to an ordered index list [1 : κ];
6: for each position j in the list [1 : κ]
7:   generate a set of ψ candidate words C_j with BERT;
8:   remove the words from C_j that do not have the same part-of-speech as x_j;
9:   select the word x*_j ∈ C_j that causes the greatest decrease in UAS when x_j is replaced with x*_j in x;
10:  x′ = replace x_j with the word x*_j in x.
11: return x′.
Figure 3: Adversarial example generation algorithm for dependency parsing in the black-box setting.

4.2 Sentence-level and Phrase-level Attacks
For the sentence-level attack, we simply use the algorithms listed in Figures 3 and 4 to form an attack. For the phrase-level attack, we first choose two phrases (corresponding to two subtrees in a parse) from a sentence, which do not overlap with each other and are separated by at least k words. Then, we try to cause the parser to make mistakes in a target subtree by modifying another one. Unlike the sentence-level attack, any error occurring outside the target subtree will not be counted as a successful trial. Note that even if we can force the parser to change its prediction on the head of the target subtree's root, it is still not considered a successful attack, because the changed edge connects to a word outside the subtree. We require that all the subtrees contain 4 to 12 words3, and that the source subtree to be modified and its target share no word in common. Depending on the purpose of the adversary, adversarial attacks can be divided into two categories: targeted and non-targeted attacks. The subtree-level attack can be viewed as a targeted attack, while the sentence-level attack is a non-targeted one. A small subtree can be taken as a relatively independent structure. If a parser is robust enough, it should always give a consistent result for a target subtree even when there are errors in another source subtree that does not overlap with the target one. Therefore, we relax some constraints in the case of phrase-level attacks, and allow the words in the source tree to be replaced with any word in the vocabulary as long as the number of modified words is no more than a given value.

3 A subtree-level attack can be launched on a sentence if it has at least two such subtrees. We ensure that there are enough sentence examples for the experiment. According to our statistics on the English PTB test set, 35.14% of sentences have two such subtrees, 17.18% have three, and 8.98% have four or more.
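For comparison with the white-box sketch above, here is a minimal sketch of the black-box sentence-level procedure of Figure 3. The helpers `parser.predict_heads` and `bert_candidates` are hypothetical wrappers around the target parser and a masked language model, and the constraint filter is the one sketched in Section 4.1; this is a sketch under those assumptions, not the authors' implementation.

def parse_uas(parser, words, gold_heads):
    """Hypothetical helper: parse `words` with the (black-box) target parser and
    return the unlabeled attachment score against `gold_heads` (position 0 = root)."""
    pred_heads = parser.predict_heads(words)
    correct = sum(p == g for p, g in zip(pred_heads[1:], gold_heads[1:]))
    return correct / (len(words) - 1)

def black_box_attack(parser, words, tags, gold_heads, bert_candidates,
                     max_ratio=0.15, n_candidates=50):
    """Greedy two-step attack: rank weak spots by the UAS drop caused by an <unk>
    substitution, then try BERT-proposed, POS-consistent substitutes at each spot."""
    budget = max(1, int(max_ratio * len(words)))
    base_uas = parse_uas(parser, words, gold_heads)

    # Step 1: weak-spot ranking via <unk> probing.
    drops = []
    for i in range(1, len(words)):                       # skip the artificial root symbol
        if not is_attackable(i, tags):
            continue
        probed = words[:i] + ["<unk>"] + words[i + 1:]
        drops.append((base_uas - parse_uas(parser, probed, gold_heads), i))
    weak_spots = [i for _, i in sorted(drops, reverse=True)[:budget]]

    # Step 2: at each weak spot, keep the candidate causing the largest UAS decrease.
    adv = list(words)
    for i in weak_spots:
        best_uas, best_word = parse_uas(parser, adv, gold_heads), None
        for cand in bert_candidates(adv, i, n_candidates):
            if not valid_substitute(words[i], cand, i, adv, tags):
                continue
            cand_uas = parse_uas(parser, adv[:i] + [cand] + adv[i + 1:], gold_heads)
            if cand_uas < best_uas:
                best_uas, best_word = cand_uas, cand
        if best_word is not None:
            adv[i] = best_word
    return adv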
Inputs: x[1:n]: an input sentence of n words x_i, 1 ≤ i ≤ n; f: a target parser; γ: the maximum percentage of words that can be modified; ψ: the size of the set of candidate words.
Output: an adversarial example x' of x.
Algorithm:
1: κ = γ·n (the maximum number of words to be modified)
2: for each word x_i except those listed in constraint (iii)
3:   x̂_i = replace x_i with the special symbol "<unk>" in x;
4:   calculate the unlabeled attachment score of f(x̂_i).
5: sort the x̂_i by their UAS, and append the top-κ positions to an ordered index list [1 : κ];
6: for each position j in the list [1 : κ]
7:   generate a set of ψ candidate words C_j with BERT;
8:   remove words from C_j if they do not have the same part of speech as x_j;
9:   select the word x*_j ∈ C_j that causes the greatest decrease in UAS if we replace x_j with x*_j in x;
10:  x' = replace x_j with the word x*_j in x.
11: return x'.
Figure 3: Adversarial example generation algorithm for dependency parsing in the black-box setting.

Inputs: x[1:n]: an input sentence of n words x_i, 1 ≤ i ≤ n; f: a target parser; γ: the maximum percentage of words that can be modified; ψ: the size of the set of candidate words; ξ: the maximum number of trials.
Output: an adversarial example x' of x.
Algorithm:
1: κ = γ·n (the maximum number of words to be modified)
2: while no decrease of UAS in the latest ξ trials do
3:   select the word x_i to be replaced as in Equation (4);
4:   if the number of words to replace is greater than κ then break;
5:   generate a set of ψ candidate words C_i with BERT;
6:   remove words from C_i if they do not have the same part of speech as x_i;
7:   choose the word x*_i ∈ C_i to replace x_i as in Equation (5);
8:   x' = replace x_i with the word x*_i in x.
9: return x'.
Figure 4: Adversarial example generation algorithm for dependency parsing in the white-box setting.

4.2 Sentence-level and Phrase-level Attacks

For the sentence-level attack, we simply use the algorithms listed in Figures 3 and 4 to form an attack. For the phrase-level attack, we first choose two phrases (corresponding to two subtrees in a parse) from a sentence, which do not overlap each other and are separated by at least k words. Then, we try to cause the parser to make mistakes in a target subtree by modifying another one. Unlike the sentence-level attack, any error occurring outside the target subtree will not be counted as a successful trial. Note that even if we can force the parser to change its prediction on the head of the target subtree's root, it is still not considered a successful attack, because the changed edge connects to a word outside the subtree. We require that all the subtrees contain 4 to 12 words, and that the source subtree to be modified and its target share no word in common. [Footnote 3: A subtree-level attack can be launched on a sentence if it has at least two subtrees. We ensure that there are enough sentence examples for the experiment. According to our statistics on the English PTB test set, 35.14% of sentences have two such subtrees, 17.18% have three, and 8.98% have four or more.]

Depending on the purpose of the adversary, adversarial attacks can be divided into two categories: targeted attacks and non-targeted attacks. The subtree-level attack can be viewed as a targeted attack, while the sentence-level attack is a non-targeted one. A small subtree can be taken as a relatively independent structure. If a parser is robust enough, it should give consistent results for a target subtree even when there are errors in another source subtree that does not overlap with the target one. Therefore, we relax some constraints in the case of the phrase-level attacks, and allow the words in the source subtree to be replaced with any word in the vocabulary, as long as the number of modified words does not exceed a given value. With the help of these adversarial examples, we can investigate whether an error in one part of a parse tree may exert long-range influence and successfully cause cascading errors.

In the black-box setting, we first collect all the subtrees from an input sentence, and then perform trial-and-error testing with every source-target pair. For each pair, we try to modify the source subtree by up to κ words (say κ = 3), replacing them with other randomly selected words. This process is repeated until a pair is found for which the UAS of the target subtree decreases. In the white-box setting, we can compute a function F(x[t], θ), as in Equation (4), for every possible target subtree (excluding its root), and then calculate a score for each source-target pair as follows:

S(x^{[s]}, x^{[t]}, \theta) = \sum_{x_i \in x^{[s]}} \Big\| \frac{\partial F(x^{[t]}, \theta)}{\partial e_{x_i}} \Big\|_2    (6)

where x[s] denotes a source subtree and x[t] a target one. Such scores can be used to rank the source-target pairs by their potential to deliver a successful attack. Generally, the greater the score, the more vulnerable the target subtree is to the source one. If we remove the sum from the right-hand side of (6), we obtain the norm of the partial derivative of the function F(x[t], θ) with respect to each word x_i in the source subtree, which helps us determine which words have higher priority to be changed. For an input sentence, we successively take pairs from the list of source-target pairs in descending order of their scores. For each pair, we simultaneously replace three words in the source subtree guided by their gradients, as in Equation (5). More than one word is replaced at each iteration to avoid getting stuck in a local optimum. This two-step procedure is repeated until the parser's prediction changes.
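A minimal sketch of the pair scoring and ranking of Equation (6) is given below (illustrative only; `target_grad_fn` is a hypothetical helper returning the gradients of F(x[t], θ) with respect to the word embeddings).

```python
import torch

def pair_score(target_grads, source_word_idx):
    # Eq. (6): sum of gradient norms of F(x[t], theta) taken w.r.t. the
    # embeddings of the words inside the source subtree.
    # target_grads: (n, d) gradients of F for the target subtree, one row per word.
    return target_grads[source_word_idx].norm(dim=-1).sum().item()

def rank_source_target_pairs(pairs, target_grad_fn):
    # Rank candidate (source, target) subtree pairs by score, highest first.
    # pairs: list of (source_word_indices, target_subtree) tuples.
    scored = [(pair_score(target_grad_fn(tgt), src_idx), src_idx, tgt)
              for src_idx, tgt in pairs]
    return sorted(scored, key=lambda item: -item[0])
```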
5 Experiments

We first describe the target parser as well as its three variants, the evaluation dataset, and the hyper-parameter settings. We then report the empirical results of the proposed adversarial attacks and adversarial training. We also list some adversarial examples generated by our attacking algorithms in Table 5.

5.1 Target Parser and Its Variants

We choose the graph-based dependency parser proposed by Dozat and Manning (2017) as our target model. This well-known parser achieved 95.7% unlabeled attachment score (UAS) and 94.1% labeled attachment score (LAS) on the English PTB dataset and close to state-of-the-art performance on standard treebanks for five other natural languages (Buchholz and Marsi, 2006). Specifically, Dozat and Manning (2017) extend the bidirectional LSTM-based approach of Kiperwasser and Goldberg (2016) with biaffine classifiers to predict arcs and labels. They presented two variants of their model: one takes only words as input, and the other takes both the words and their POS tags; we use the Stanford POS tagger (Toutanova et al., 2003) to generate the POS tag for each word. In addition to these two, we add a new variant that takes characters as inputs and uses a bidirectional LSTM to generate word representations from the character embeddings.

Model      | Max% | Word-based           | Word + POS           | Character-based
           |      | UAS    #Word  Succ%  | UAS    #Word  Succ%  | UAS    #Word  Succ%
Clean      |  --  | 95.52    --     --   | 95.58    --     --   | 95.73    --     --
Black-box  |  5%  | 90.91   0.99   42%   | 90.87   1.00   41%   | 91.18   1.09   37%
           | 10%  | 89.38   1.52   49%   | 90.20   1.54   43%   | 88.49   1.99   51%
           | 15%  | 88.69   2.23   51%   | 89.86   2.24   44%   | 85.89   3.08   60%
White-box  |  5%  | 87.80   0.60   55%   | 89.76   0.50   46%   | 90.37   0.40   37%
           | 10%  | 83.73   1.50   68%   | 86.36   1.40   61%   | 86.58   1.20   54%
           | 15%  | 80.35   2.40   77%   | 83.75   2.10   69%   | 83.25   1.90   64%
Table 1: Results of sentence-level adversarial attacks on a state-of-the-art parser with the English Penn Treebank in both the black-box and white-box settings. "Word-based", "Word + POS", and "Character-based" denote three variants of the model of Dozat and Manning (2017) that differ in their input forms. "Max%" denotes the maximum percentage of words that are allowed to be modified, "UAS" the unlabeled attachment score, "#Word" the average number of words actually modified, and "Succ%" the success rate in terms of the number of sentences.

Model       | Word-based                  | Word + POS                  | Character-based
            | Original  Adv [b]  Adv [w]  | Original  Adv [b]  Adv [w]  | Original  Adv [b]  Adv [w]
Clean       | 95.52     95.59    95.16    | 95.58     95.53    95.05    | 95.73     95.55    95.34
Attack [b]  | 88.69     90.03    89.98    | 89.86     91.86    91.60    | 85.89     92.93    89.89
Attack [w]  | 80.35     80.82    88.87    | 83.75     84.89    90.32    | 83.25     84.10    86.56
Table 2: Performance of adversarial training. "Clean" stands for the testing results on the clean data, and "Attack [b]" and "Attack [w]" respectively denote the accuracy under test-time attacks in the black-box ([b]) and white-box ([w]) settings. "Original" and "Adv" denote the testing and adversarial accuracy of the models without and with adversarial training.

Model      | POS | Word-based        | Word + POS
           |     | ∆UAS     Succ%    | ∆UAS     Succ%
Black-box  | JJ  | −1.89     23%     | −1.13     17%
           | NN  | −2.00     24%     | −1.25     20%
           | RB  | −3.13     37%     | −2.43     31%
           | VB  | −7.42     48%     | −6.17     41%
           | IN  | −11.10    67%     | −9.22     62%
White-box  | JJ  | −4.48     37%     | −2.23     25%
           | NN  | −10.53    65%     | −8.33     57%
           | RB  | −4.09     40%     | −3.14     35%
           | VB  | −13.36    73%     | −10.51    63%
           | IN  | −15.58    87%     | −13.24    85%
Table 3: The attack success rate and the corresponding change in UAS when modifying words of different parts of speech. "JJ" denotes adjective, "NN" noun, "RB" adverb, "VB" verb, and "IN" preposition.
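Since the weak-spot search, the attack objective, and the numbers in Tables 1-3 are all driven by UAS, a minimal reference implementation of the metric may be useful (standard definition, not code from the paper):

```python
def unlabeled_attachment_score(pred_heads, gold_heads):
    # UAS: the fraction of words whose predicted head matches the gold head.
    # pred_heads / gold_heads: one head index per word (0 conventionally = root).
    assert len(pred_heads) == len(gold_heads) and gold_heads
    correct = sum(p == g for p, g in zip(pred_heads, gold_heads))
    return correct / len(gold_heads)
```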
5.2 Datasets and Hyper-parameter Settings

We evaluate our methods on the English Penn Treebank (PTB), converted into Stanford dependencies using version 3.3.0 of the Stanford dependency converter (de Marneffe et al., 2006). [Footnote 4: We ask for the copula (linking verbs) to remain the head when its complement is an adjective or noun.] We follow the standard PTB split, using sections 2-21 for training, section 22 for development, and section 23 for testing. For the target parsing models, we use the same choice of hyperparameters as Dozat and Manning (2017): 100-dimensional uncased word embeddings and POS tag vectors; three bi-directional LSTM layers (400 dimensions in each direction); and 500- and 100-dimensional ReLU MLP layers for arc and label predictions, respectively. For the character-based variant, we use 100-dimensional character vectors and a 200-dimensional LSTM. The other hyper-parameters were tuned on the PTB 3.3.0 development set by trying only a few different settings. In the following experiments, the maximum size of the candidate word set ψ was set to 50, the coefficient α in Equation (5) to 15, and the maximum number of trials to 40. In the white-box setting, we terminate the trials for an example immediately if the drop in UAS exceeds 30%.

5.3 Results of the Sentence-level Attacks

We now report the empirical results of the sentence-level adversarial attacks. In Table 1, we present both the clean accuracy and the accuracy under attack on PTB for the three variants of the parsing model (Dozat and Manning, 2017), allowing three different word replacement budgets: 5%, 10%, and 15%. The success rate is defined as the number of sentences successfully modified (i.e., causing the model to make errors) divided by the total number of sentences attempted. The results show that the proposed attacks are effective. With fewer than two words perturbed on average, our white-box attack consistently achieves a success rate above 60%.
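For clarity, the "Succ%" and "#Word" columns of Table 1 can be derived from per-sentence attack records roughly as follows (a sketch; the record format is illustrative):

```python
def sentence_level_summary(attack_log):
    # attack_log: list of per-sentence records (uas_before, uas_after, n_modified).
    # Succ%: fraction of attempted sentences whose UAS dropped after the attack.
    # #Word: average number of words actually modified per sentence.
    if not attack_log:
        return 0.0, 0.0
    succ = sum(1 for before, after, _ in attack_log if after < before)
    avg_mod = sum(n for _, _, n in attack_log) / len(attack_log)
    return succ / len(attack_log), avg_mod
```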
We also observe that the word-based model is the most vulnerable to adversarial examples among the three variants. Its UAS drops by 15.17 points, and 77% of sentences admit adversarial perturbations under the white-box attack with 15% word replacement. The model taking both words and POS tags as input ("Word + POS") appears to be more robust against adversarial examples in both settings. One reasonable explanation is that we require the substitute words to have the same part of speech as the original ones, so the model can produce more consistent results with the help of the POS tags. The white-box attacks are clearly much more effective than the black-box ones across the three variants of the parsing model and the different word replacement rates.

Despite the high success rates, we want to know whether the generated examples are syntactically faithful to and coherent with the original sentences. To evaluate the quality of these adversarial examples, we randomly collected 100 sentences together with their adversarial examples generated in the black-box and white-box settings, and presented them to three human evaluators. The evaluators were asked to examine whether each generated example still preserves the original syntactic structure. Using a majority vote, we found that 80% of the examples generated in the white-box setting and 75% of those generated in the black-box setting were judged to leave the syntactic structure unchanged. The three human evaluators are postgraduate students with at least three years of research experience in syntactic parsing. Their pairwise agreement percentages are 90%, 82%, and 82% for the adversarial examples generated in the white-box setting, and 93%, 85%, and 84% for those generated in the black-box setting. Their average Kappa coefficients are 53.8% (white-box) and 67.3% (black-box), respectively. In Table 5, we list five sentences and their adversarial examples generated by our algorithms in each of the black-box and white-box settings, randomly extracted from the PTB test set.

We would also like to know which type of words is most likely to form a successful attack when modified, as in (Hashemi and Hwa, 2016). In this experiment, we only allow replacing words belonging to a single part of speech at a time, and we also try generating adversarial examples by replacing prepositions, which is forbidden in the experiments above. It can be seen from Table 3 that the following dependencies suffer especially: prepositional, verbal, and adverbial phrases. Not surprisingly, most of the errors occur with structures that are inherently hard to attach in dependency parsing.

Model      | Word-based        | Word + POS
           | k ≥ 0    k ≥ 1    | k ≥ 0    k ≥ 1
Black-box  | 34.73%   21.72%   | 19.61%   10.06%
White-box  | 40.06%   28.66%   | 25.35%   15.82%
Table 4: The success rate of the phrase-level attacks.

5.4 Results of the Phrase-level Attacks

For the phrase-level attacks, we aim to study whether changes in a source subtree can alter the prediction on another target subtree (see the illustration in Figure 2). We tried two different settings: one requires the source and target subtrees to be separated by at least one word (k ≥ 1), and the other only requires that the two subtrees do not overlap with each other (k ≥ 0). In the case of k ≥ 0, we can find 1420 sentence examples in the test set, while for k ≥ 1 there are 1340 valid examples that can be used to deliver phrase-level attacks (there are 2416 sentences in total in the PTB test set). Note that all the subtrees should contain 4 to 12 words. For each source-target pair, we allow modifying up to 3 words in the source subtree; for some sentences, adversarial examples can be generated by replacing just one or two words. The success rate for the phrase-level attacks is defined as the number of sentences containing at least one source-target subtree pair for which a modification in the source subtree causes the model to make errors in the target subtree, divided by the number of sentences that contain at least one source-target subtree pair at all, regardless of whether the model is caused to make an error. It can be seen from Table 4 that with only three words perturbed, the proposed white-box attack achieves a 27.47% success rate on average over all settings. The white-box attacks are again much more effective, and take less than half the time of the black-box ones to find the most vulnerable pairs. As with the sentence-level attacks, verbal and prepositional phrases are shown to be more susceptible to such attacks.
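Because the denominator of the phrase-level success rate is easy to misread, here is the definition above spelled out as a short sketch (field names are illustrative):

```python
def phrase_level_success_rate(per_sentence_pairs):
    # per_sentence_pairs: for each sentence, a list of booleans, one per valid
    # (source, target) subtree pair, True if modifying the source subtree
    # caused an error inside the target subtree.
    eligible = [pairs for pairs in per_sentence_pairs if pairs]  # >= 1 valid pair
    hits = sum(1 for pairs in eligible if any(pairs))
    return hits / len(eligible) if eligible else 0.0
```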
" We 're are after a little bigger niche , " he said .
Looking ahead to other big commodity markets this week .
The centers normally usually are closed through the weekend .
But at least most part of the increase could have come from higher prices , analysts said .
Posted yields on 30 year mortgage commitments for delivery within 30 years days priced at par .
But his release within the next few months is widely highly excepted .
The most popular such shows appeals focus on narrow national concerns .
Size Breadth and weight considerations also have limited screen displays .
Columbia savings officials were not available last for comment on the downgrade .
That would be the lowest worst level since the early 1970s .
[The colour highlighting and dependency arcs of Figure 5 are not reproduced in this text version.]
Figure 5: Five adversarial examples each generated by our algorithms in the black-box (top) and white-box (bottom) settings. These adversarial examples were randomly extracted from the test set of the English Penn Treebank (PTB). The original words are highlighted in bold blue font while the substitute words are highlighted in bold green font. The incorrect arcs (i.e., head-modifier pairs) predicted by the target parser are indicated by dashed arrows, while the ground-truth arcs are indicated by solid arrows.

5.5 Adversarial Training

We also investigated whether our adversarial examples can aid in improving model robustness. We randomly selected 50% of the training data, generated adversarial examples from it using the algorithms listed in Figures 3 and 4, and merged these adversarial examples with the original training set. Previous studies show that models tend to overfit the adversarial examples, and that their performance on clean data drops if too many adversarial examples are used; we therefore adopted a similar training strategy. The testing and adversarial performance with and without adversarial training are listed in Table 2. Under all circumstances, adversarial training improved the generalization of the models and made them less vulnerable to the attacks, while suffering little to no loss on the clean data. For example, 88.69 (column 1, row 2) is the accuracy achieved by the original model on the adversarial examples generated in the black-box setting, while 90.03 (column 2, row 2) and 89.98 (column 3, row 2) are the accuracies achieved on the perturbed test data under the same test-time attacks by the adversarially trained models. It is clear that the robustness of the parsing models was improved by adversarial training. Furthermore, as shown in the first row of Table 2, these robust models suffer little to no performance drop on the clean test data.

6 Conclusion

In this paper, we study the robustness of neural network-based dependency parsing models. To the best of our knowledge, adversarial examples for syntactic tasks, such as dependency parsing, have not previously been explored in the literature. We develop the first adversarial attack algorithms for this task and successfully find the blind spots of parsers with high success rates. Furthermore, by applying adversarial training with the proposed attacks, we are able to significantly improve the robustness of dependency parsers without sacrificing their performance on clean data.

Acknowledgements

The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported by National Key R&D Program of China (No. 2018YFC0830902), Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01) and Zhangjiang Lab.

References

Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Samuel Barham and Soheil Feizi. 2019. Interpretable adversarial training for text. Computing Research Repository, arXiv:1905.12864.
Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of the International Conference on Computational Natural Language Learning.
Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In Proceedings of the IEEE Symposium on Security and Privacy.
Minhao Cheng, Wei Wei, and Cho-Jui Hsieh. 2019. Evaluating and enhancing the robustness of dialogue systems: A case study on a negotiation agent. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. 2018.
Seq2Sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. Computing Research Repository, arXiv: 1803.01128. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of the International Conference on Learning Representations. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Steffen Eger, Gozde Gul Sahin, Andreas Ruckl´e, JiUng Lee, Claudia Schulz, Mohsen Mesgar, Krishnkant Swarnkar, Edwin Simpson, and Iryna Gurevych. 2019. Text processing like humans do: Visually attacking and shielding NLP systems. In Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. 2018. Robust physical-world attacks on deep learning models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. Computing Research Repository, arXiv: 1801.04354. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In Proceedings of the International Conference on Learning Representations. Homa B. Hashemi and Rebecca Hwa. 2016. An evaluation of parser robustness for ungrammatical sentences. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, Wen-Lian Hsu, and Cho-Jui Hsieh. 2019. On the robustness of self-attentive models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is BERT really robust? a strong baseline for natural language attack on text classification and entailment. Computing Research Repository, arXiv: 1907.11932. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313–327. Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, and Stefano Ermon. 2018. Adversarial examples for natural language classification problems. OpenReview Submission, id: r1QZ3zbAZ. Qi Lei, Lingfei Wu, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, and Michael Witbrock. 2019. Discrete adversarial attacks and submodular optimization with applications to text classification. In Proceedings of the Conference on Systems and Machine Learning. Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep text classification can be fooled. In Proceedings of the International Joint Conference on Artificial Intelligence. 
Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the International Conference on Language Resources and Evaluation. 6610 Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. 2016. DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Dominick Ng and James R. Curran. 2015. Identifying cascading errors using constraints in dependency parsing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing. Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2016a. The limitations of deep learning in adversarial settings. In Proceedings of the IEEE European Symposium on Security and Privacy. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016b. Distillation as a defense to adversarial perturbations against deep neural networks. In Proceedings of the IEEE Symposium on Security and Privacy. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Suranjana Samanta and Sameep Mehta. 2017. Towards crafting text adversarial samples. Computing Research Repository, arXiv: 1707.02812. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. Computing Research Repository, arXiv: 1312.6199. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the International Joint Conference on Natural Language Processing. Wenqi Wang, Lina Wang, Benxiao Tang, Run Wang, and Aoshuang Ye. 2019. A survey: Towards a robust deep neural network in text domain. Computing Research Repository, arXiv: 1902.07285. Catherine Wong. 2017. DANCin SEQ2SEQ: Fooling text classifiers with adversarial text example generation. Computing Research Repository, arXiv: 1712.05419. Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, and Anil K. Jain. 2019. Adversarial attacks and defenses in images, graphs and text: A review. Computing Research Repository, arXiv: 1909.08072. Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-Ling Wang, and Michael I. Jordan. 2018. Greedy attack and gumbel attack: Generating adversarial examples for discrete data. Computing Research Repository, arXiv: 1805.12316. Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. 2019. 
Adversarial examples: Attacks and defenses for deep learning. IEEE transactions on neural networks and learning systems, 30(9):2805–2824. Yuan Zang, Chenghao Yang, Fanchao Qi, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2019. Textual adversarial attack as combinatorial optimization. Computing Research Repository, arXiv: 1910.12196. Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018. Generating natural adversarial examples. In Proceedings of the International Conference on Learning Representations.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6611–6628 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6611 Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach Wenyu Du1,2∗, Zhouhan Lin3,4∗, Yikang Shen3,4, Timothy J. O’Donnell3,5,6, Yoshua Bengio3,4 and Yue Zhang1,2 1School of Engineering, Westlake University 2Institute of Advanced Technology, Westlake Institute for Advanced Study 3Mila 4Universit´e de Montr´eal 5Department of Linguistics, McGill University 6Canada CIFAR AI Chair Abstract It is commonly believed that knowledge of syntactic structure should improve language modeling. However, effectively and computationally efficiently incorporating syntactic structure into neural language models has been a challenging topic. In this paper, we make use of a multi-task objective, i.e., the models simultaneously predict words as well as ground truth parse trees in a form called “syntactic distances”, where information between these two separate objectives shares the same intermediate representation. Experimental results on the Penn Treebank and Chinese Treebank datasets show that when ground truth parse trees are provided as additional training signals, the model is able to achieve lower perplexity and induce trees with better quality. 1 Introduction It is widely believed in linguistics, cognitive science, and computational linguistics that the latent structure underlying how words combine to form sentences is best represented as a tree structure. The study of the computational mechanisms and systems of constraints that characterize such derivations or parse trees is a central question in these fields (Pollard and Sag, 1994; Steedman and Baldridge, 2011; Huddleston and Pullum, 2002; Adger, 2003; Bresnan, 2001; Chomsky, 1995; Sag et al., 2003). Using syntactic information for the language modeling task has been a popular research topic since the 1990s. Early efforts included various approaches that attempted to incorporate shallow syntactic information such as POS tags (Heeman and Allen, 1997; Srinivas, 1996) as well as a more complete structures (Wright et al., 1994; Jurafsky et al., 1995). Most of such work falls under the topic of structured language modeling (Chelba and ∗Equal contribution. Jelinek, 2000; Van Uytsel et al., 2001; Xu et al., 2002). With the resurgence of neural network approaches, sequential, large-scale neural language models have been shown to significantly outperform traditional language models (Merity et al., 2017; Yang et al., 2018) without using syntactic structural information. On another scenario, recent analysis also reveals that state-of-the-art sequential neural language models still fail to learn certain long-range syntactic dependencies (Kuncoro et al., 2018). Thus it is an interesting problem to explore the relation between language models and syntax and investigate whether syntax can be integrated to enhance neural language models. To this end, two main lines of work have been investigated, namely transition-based and distancebased methods, respectively. The former strand of work has sought to jointly train a transition-based parser (Nivre, 2008; Zhang and Nivre, 2011; Andor et al., 2016) with a language model using a linearized structured sentence. For example, recurrent neural network grammars (RNNGs) model the joint probability of both words and trees by training a generative, top-down parser (Dyer et al., 2016; Cheng et al., 2017). 
Subsequent work (Kim et al., 2019b) has developed an unsupervised variant of RNNGs based on an expectation maximization algorithm, which enables the system to be used as a language model without access to parser data. The second strand of work designs language models that are constrained using syntactic constituents induced using the notion of syntactic distance (Shen et al., 2017, 2018). The distances are a sequence of scalars between consecutive words, which are higher when there is a higher level of constituent boundary between the corresponding pair of words. While aligning nicely with the sequential nature of language models, syntactic distances can be transformed into syntactic tree structures with simple principles (Shen et al., 2017). 6612 The major difference between the above two strands of work is that the former focuses more on parsing performance while the latter aligns better to language model settings. There are three main benefits of the syntactic distance approach. First, typical engineering tricks for language modeling such as batching and regularization (Merity et al., 2017) can be directly used. Second, unlike transition-based methods, which requires to model each sentence independently, distance-based models allow direct comparison with mainstream prior work on language modeling (Gal and Ghahramani, 2016; Merity et al., 2017; Yang et al., 2018) on the same datasets, which carry information across sentence boundaries. Third, there is no risk of compounding errors as compared to the transitionbased approach. However, unlike for transitionbased approaches (Kim et al., 2019b), for distancebased approaches there have been no studies examining the relationship between induced syntactic structure and human labeled syntactic structure, or whether human labeled syntactic trees can be used to improve language modeling (Dyer et al., 2016; Kim et al., 2019b). To this end, we investigate distance-based language models with explicit supervision. In particular, we inject syntactic tree supervision into distance-based neural language models by breaking a syntactic tree into a label sequence, and extending a distance-based language model to include a multi-task objective that also learns to predict goldstandard labels. We choose the Ordered-Neuron LSTM (ON-LSTM) (Shen et al., 2018) as our baseline model, which gives the best results among distance-based models. For making fair comparison with the dominant methods on language modeling, we also manually extend the most commonly-used dataset for evaluating language models, which we name PTB-Concat (Mikolov et al., 2010). It is a version of the Penn Treebank (PTB) (Marcus et al., 1993) dataset with syntactic trees removed, and with preprocessing of numbers, punctuation and singleton words. We add syntactic trees, thus directly compare distancebased methods with other language models. Experimental results show that incorporating linguistically motivated structures could practically improve language modeling performance. To the best of our knowledge, this is the first work to successfully incorporate gold-standard syntactic trees into syntactic distance based language models. Additional experiments suggest that the level of improvement could also be achieved in other language models. Furthermore, analyses of the trees learned by the multi-task models demonstrate that they are different from both gold trees and unsupervisedly learned trees. 1 2 Related Work Using syntactic information for language modeling dates back to the last century. 
Srinivas (1996) proposed using shallow syntactic structures—so-called “super-tags”—which successfully reduced perplexity by 38% over a tri-gram based word-level language model. More complete parser integration is also explored under the heading of “structured language modeling” (Chelba and Jelinek, 2000). This research covers a wide range of different parsers, albeit mostly with N-gram models (Van Uytsel et al., 2001; Xu et al., 2002). Wright et al. (1994) and Jurafsky et al. (1995) extend bi-gram language models with a context-free grammar. Feed-forward neural language models were also explored (Xu et al., 2003). However, the performance does not approach that of the modern neural LMs. Dyer et al. (2016) first proposed RNNG. Subsequent work extends the model with an encoderdecoder architecture (Cheng et al., 2017), unsupervised learning (Kim et al., 2019b), knowledgedistillation (Kuncoro et al., 2019) and computational psycholinguistics (Hale et al., 2018). Shen et al. (2017) first used syntactic distance to constrain language modeling. Its subsequent work (Shen et al., 2018) transfers the distance notion to LSTM cell. Our work extends distance-based methods in trying to introduce supervised syntax to these models. A very recent work makes use of attention over spans instead of syntactic distance to inject inductive bias to language models (Peng et al., 2019). However, the time complexity of injecting supervision is much higher than distancebased approach (O(n2) VS O(n) ). 3 Model The overall structure of our model is shown in Figure 1. In particular, the ON-LSTM is taken as the base language model, and syntactic trees are added by conversion to distance metrics. The supervised distance values are taken as one additional output, resulting in a multi-view model. 1We release the code at https://github.com/ wenyudu/SDLM. 6613 Linear Layer hwt cumax Lsyd Llm cumax xt ht-1 Linear Layer hft Figure 1: Split-head approach of constructing the two master forget gates in the multi-task setting. 3.1 Ordered Neurons LSTM Ordered Neurons LSTM (ON-LSTM) (Shen et al., 2018) is built upon a vanilla LSTM model (Hochreiter and Schmidhuber, 1997) with two additional gates, namely a master input gate ˜it and a master forget gate ˜ft, each being a vector of the same shape as the LSTM forget and input gates: ft = σ(Wf ◦[xt, ht−1] + bf) (1) it = σ(Wi ◦[xt, ht−1] + bi) (2) ot = σ(Wo ◦[xt, ht−1] + bo) (3) ˆct = tanh(Wc ◦[xt, ht−1] + bc) (4) ˜ft = cumax(W ˜f ◦[xt, ht−1] + b ˜f) (5) ˜it = 1 −cumax(W˜i ◦[xt, ht−1] + b˜i) (6) where cumax is defined as the cumulative sum of softmax outputs, i.e., cumax(·) = cumsum(softmax(·)). The cumax function provides an inductive bias to model hierarchical structures through enforcing units in the master forget gate ˜ft to increase monotonically from 0 to 1 and those in the master input gate ˜it to decrease monotonically from 1 to 0. The two gates are applied on the original input and forget gates as follows: ωt = ˜ft ◦˜it (7) ˆft = ft ◦ωt + ( ˜ft −ωt) = ˜ft ◦(ft ◦˜it + 1 −˜it) (8) ˆit = it ◦ωt + (˜it −ωt) = ˜it ◦(it ◦˜ft + 1 −˜ft) (9) ct = ˆft ◦ct−1 +ˆit ◦ˆct (10) ht = ot ◦tanh(ct). (11) ON-LSTM can learn the implicit structure of a language in the form of a binary tree in an unsupervised manner, through syntactic distances, which are calculated as: dt = Dm − Dm X k=1 ˜ft (12) Figure 2: Binarized grammar tree and its corresponding syntactic distances. The heights of the bars stand for the values of the distances. 
To convert this tree to syntactic distances, we first assign all the words an initial value of 1, and then the non-leaf nodes are assigned distances in the order of d3 →d2 →d1 →d4, according to the procedures in the second part of Model section. On the other hand, given the distances, the tree can be recovered in a top-down process by setting up the split boundaries in descending order of distances (i.e., d4 →d1 →d2 →d3). Syntactically, a shorter distance between a pair of words indicates a closer relationship between the constituents on the two sides of the distance. Note that since only the relative order of the distances could affect the structure of the trees, valid values of these distances are not unique. where Dm is the size of the hidden state. The syntactic distance dt between two consecutive words is a scalar value, which can be interpreted as reflecting the syntactic relatedness between the constituents before and after time point t. In terms of trees, it can be thought of as the height the lowest tree node that encloses both words. In the case where we consider discrete trees, the height is given by the maximum path length from a leaf. In the more general case, it can be thought of as a scalar value measuring a continuous notion of node height. Figure 2 depicts a sample sentence with its syntactic distances and corresponding tree structures. More generally, the binary tree structure of a sequence with N tokens can be specified with a sequence of N −1 syntactic distances. This definition of distance makes the syntactic distance an ultrametric (Holly, 2001; Wu et al., 1999), a concept which is important in the theory of hierarchical agglomerative clustering (Johnson, 1967) and was first explored in a linguistic setting by Levelt (1974). 3.2 Converting Grammar Trees to Syntactic Distances To integrate treebank trees into ON-LSTM, we need to first convert syntactic trees into a representation based on syntactic distances. Since the original grammar trees are not necessarily binary, 6614 we first split non-binary nodes by adding sentinel intermediate nodes to form a right-branched binary tree, following the steps in Stern et al. (2017). Now for a binary tree with N leaf nodes, we have N −1 non-leaf nodes that correspond to the N −1 slots between each of the adjacent word pairs, each of which are assigned a syntactic distance (Figure 2). The binary tree can thus be represented as a sequence of distances d1, d2, . . . , dN−1. The conversion from binary tree to syntactic distances thus translates to the assigning of a distance value for each of the N −1 non-leaf nodes in the tree. This is achieved in a bottom-up process. We first initialize a distance value of 1 at all of the leaf nodes, and then compute the syntactic distances of the parent nodes by recursively tracing back their parents. More specifically, for a certain parent node, its corresponding syntactic distance dP is computed with respect to the syntactic distances of its children dL and dR, i.e., dP = max{dL, dR} + 1. (13) A more detailed algorithm flowchart of tree-todistance conversion is given in Appendix A.1. 3.3 Auxiliary Syntactic Distance Outputs In ON-LSTM the distances dt’s in Equation 12 are used to infer the structure of grammar trees. Consequently, a straight-forward way to incorporate ground truth parse trees is to use the ground truth distances dg t to guide dt, as depicted in Figure 1. 
Interestingly, directly forcing the structure inferred by language models to be coherent to linguist-tagged ground truth trees barely improves the language model performance (see Section 6). Instead, we introduce a “split-head” setting, which can practically improve LM performances by learning two sets of closely related syntactic distances. In particular, we use another master forget gate ˜fw t for inferring a set of distances that are trained to align with the gold-standard syntactic distances, while leaving the original distances dt computed from ˜ft intact. To achieve this, we introduce an extra linear layer on top of the hidden states hf t , and from there infer a separate set of master forget gates. In this way, both of the master forget gates ˜ft and ˜fw t share the same input hf t , but optimize two different sets of trees for the language modeling and parsing task, respectively. i.e., hf t = W ˜f ◦[xt, ht−1] + b ˜f (14) ˜ft = cumax(hf t ) (15) ˜fw t = cumax(Ws(hf t ) + bs) (16) The syntactic distances for the auxiliary supervised targets are then calculated as follows: dw t = Dm − Dm X k=1 ˜fw tk (17) where ˜fw tk is the k-th element in the vector ˜fw t 3.4 Grammar Trees as Auxiliary Supervised Targets for Language Modeling With the additional master forget gate ˜fw t , the model has two different sets of predictions. The first set is the language model outputs of ONLSTM, predicting the next words. The second set is the distances calculated in Equation 17. The original language modeling structure of the ONLSTM model is left intact after the modification, so we can continue to use the master forget gate ˜ft to update hidden states and calculate the softmax output in ON-LSTM for the language modeling part. We denote the negative log-likelihood loss in the language model part as Llm. For brevity, we do not discuss the details of the loss. For aligning the syntactic distances, we perform a ranking loss between the learned syntactic distance dw t and ground truth distance dg, which was first proposed by Burges et al. (2005). The goal is to encourage the model to produce the distances that have the same ranking order as the ground truth distances: Lsyd = X i,j>i max(0, (1−sign(dg i −dg j)(dw i −dw j ))). (18) The joint objective function is thus to minimize the following loss: L = Llm + αLsyd (19) where α is the scaling parameter. 4 Datasets We make test datasets in English and Chinese, respectively, both of which have parse trees and also language modeling benchmarks. For English, we use the Penn Treebank (PTB) dataset (Marcus 6615 et al., 1993). Mikolov et al. (2010) have provided a widely accepted version of PTB for language modeling. Several modifications are made to the original treebank. For example, all punctuation symbols are removed, all characters are lower-cased, the vocabulary size is truncated at 10,000 and all sentences are concatenated. However, this version of PTB discards the parse tree structures, which makes it unsuitable for comparing sequential language models with those utilizing tree structures. We refer to this version as PTB-Concat. Dyer et al. (2016) proposed a different version of PTB, which retains the parse tree structures. Sentences are modeled separately, punctuation is retained, and singleton words are replaced with the Berkeley parser’s mapping rules, resulting in much larger vocabulary size, 23,815-word types. Since it retains the parse trees, this dataset enables direct comparison between models that utilize parse trees with those who do not. 
But unfortunately, since the vocabulary is different from PTB-Concat, and the sentences are processed separately, the results are not directly comparable with those in PTB-Concat, on which most existing work on language modeling reports results. We refer to this version as PTB-Sepsent. As mentioned above, a salient limitation of PTBSepsent is that it does not allow fair comparison with existing LM work on PTB-Concat. To address this issue, we propose a different variation of PTB dataset that both uses the same vocabulary size as PTB-Concat and at the same time retaining the ground-truth grammar trees. We pre-process the PTB dataset by following the same steps indicated by Mikolov et al. (2010) to obtain a modified treebank with the same vocabulary set as PTB-Concat. Sentences are concatenated, and we make sure that the sentences are the same to PTB-Concat, from token to token, in the training, validation, and test sets. This results in the same vocabulary as that of PTB-Concat, which allows us to directly compare models that utilize parse trees with the existing reports of performance on PTB-Concat. We refer to this version of PTB-Concat with syntax as PTB-Concat-Syn and we will cover preprocessing details in Appendix A.3. For Chinese, we use the Chinese Treebank 5.1 (Xue et al., 2005), with the same settings as Kim et al. (2019b). Sentences are modeled separately and singleton words are replaced with a single <UNK> token. It will be referred to as CTBModel Param Dev Test Gal and Ghahramani (2016) - Variational LSTM 66M − 73.4 Kim et al. (2016) - CharCNN 19M − 78.9 Merity et al. (2016) - Pointer Sentinel-LSTM 21M 72.4 70.9 Grave et al. (2016) - LSTM − − 82.3 Zoph and Le (2016) - NAS Cell 54M − 62.4 Zilly et al. (2017) - Variational RHN 23M 67.9 65.4 Shen et al. (2017) - PRPN − − 62.0 Merity et al. (2017) - 3-layer AWD-LSTM 24M 60.0 57.3 Zolna et al. (2018) - Fraternal dropout 24M 58.9 56.8 Shen et al. (2018) - 3-layer ON-LSTM 25M 58.3 56.2 ONLSTM-SYD 25M 57.8 55.7 Yang et al. (2018) - AWD-LSTM-MoS 22M 56.5 54.4 Takase et al. (2018) - AWD-LSTM-DOC 23M 54.1 52.4 Table 1: Various language models evaluated on validation and test sets on PTB-Concat. Our model is denoted as ONLSTMSYD, which incorporates tree structures during training. Yang et al. (2018) and Takase et al. (2018) focus on improving the softmax module of LSTM LM, which are orthogonal to ours. Sepsent in the rest of the paper. 5 Experiments We evaluate the influence of syntactic supervision on distance-based langauge models, especially in terms of its language modeling performance. We are also going to analyze the induced syntax after introducing the structural supervision. In addition, extensive ablation tests are conducted to understand how syntactic supervision affects the langauge model. 5.1 Language Modeling We first compare our models with existing sequential language models on PTB-Concat, and then we compare our model with transition-based language models on PTB-Sepsent and CTB-Sepsent, which have a larger vocabulary and also use additional grammatical structure. Results on PTB-Concat We first validate the benefit of introducing structural signal to neural language models by training our proposed model on PTB-Concat-Syn with structural supervision, and then evaluate them on the plain validation/test set. We compare our model with the original ON-LSTM model, as well as various other strong LSTM language model baselines such as AWD-LSTM (Merity et al., 2017) and a mixture of softmax (Yang et al., 2018). 
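Since the systems in Table 1 are compared by validation and test perplexity, it may help to recall that the reported number is simply the exponentiated average per-token negative log-likelihood; a minimal sketch (standard definition, not code from the paper):

```python
import math

def perplexity(total_nll, num_tokens):
    # Corpus perplexity: exp of the average per-token negative log-likelihood
    # (natural log), the quantity reported in Table 1.
    return math.exp(total_nll / num_tokens)
```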
We denote our syntactic-distance-augmented ON-LSTM model as ONLSTM-SYD. For making fair comparison, we closely follow the hyperparameters and regularization of ONLSTM (Shen et al., 2018). The model is a threelayer ONLSTM-SYD language model with an embedding size of 400 and hidden layer units 1150. The dropout rates are 0.5, 0.45, 0.3, 0.45 for the 6616 Model PTBSepsent CTBSepsent Kim et al. (2019b) - RNNLM 93.2 201.3 Kim et al. (2019b) - RNNG 88.7 193.1 Kim et al. (2019b) - URNNG 90.6 195.7 Kim et al. (2019b) - RNNG-URNNG 85.9 181.1 Kim et al. (2019b) - PRPN (default) 126.2 290.9 Kim et al. (2019b) - PRPN (finetuned) 96.7 216.0 ONLSTM-noAWD 69.0 167.7 ONLSTM 60.0 145.7 ONLSTM-SYD-noAWD 67.6 163.1 ONLSTM-SYD 59.6 140.5 Table 2: Language modeling perplexity on PTB-Sepsent and CTB-Sepsent. Kim et al. (2019b) report two results of PRPN, the default one using settings in Shen et al. (2017) and another one finetuned by themselves. Our models use the same hyperparameter settings as in Section 5.1. word vectors, LSTM weight metrics, outputs between LSTM layers and the output of the last layer, respectively. The embedding dropout ratio is 0.125. The model is trained and finetuned for 1000 epochs in total and is switched to the fine-tuning phase at epoch 650. The ground truth syntactic structures are used to supervise the syntactic distances in the third layer of ONLSTM-SYD and the loss raio α is set to 0.75. We use this setting as the default setting for all the experiments. The results are shown in Table 1. After adding structural signals into the model, our model ONLSTM-SYD significantly outperforms the original ON-LSTM model (p-value < 0.05), indicating that incorporating linguist-tagged parse trees can contribute to language modeling positively. Results on PTB-Sepsent and CTB-Sepsent PTB-Sepsent and CTB-Sepsent offer a comparable setting with other structure-aware supervised (Dyer et al., 2016) and unsupervised (Kim et al., 2019b) baselines. The results are listed in Table 2. 2 ONLSTM-SYD performs better than ONLSTM, which indicates that supervised syntactic information can help improve language modeling. The margin between our models and the baselines is rather large. We find that the set of regularization and optimization techniques proposed by Merity et al. (2017) contribute significantly to this margin. Because of the sequential and parallel nature of our model, it can directly inherit and benefit from this set of tricks. In contrast, it is non-trivial to use them for RNNG and URNNG. As a more rigorous analysis, we further conducted a set of experiments without those tricks (i.e. non2We use the preprocessing script in URNNG’s repository https://github.com/harvardnlp/urnng, which merges all UNK types. monotonically triggered ASGD, weight-dropped LSTM, finetuning). The performance (denoted as ONLSTM-SYD-noAWD) drops; however, the model still outperforms the other baselines by a significant margin. 5.2 Structure Analysis In this subsection we analyze the model to see how the additional structural supervision affects the quality of inferred trees. Note that our goal here is to analyze the influence of ground truth syntactic information on the quality of the induced trees rather than to yield a better grammar induction performance, since our model is not strictly comparable to other models due to its extra structural supervision during training. We follow the settings of Htut et al. (2018) to test our model on the WSJ10 and WSJ test sets, reporting the results in Table 3. 
The WSJ test set has 2416 sentences with arbitrary lengths, while WSJ10 consists of 7422 sentences of the whole WSJ corpora that contain no more than 10 words. We use both biased and unbiased distance-to-tree conversion algorithms for both ON-LSTM and our proposed model (c.f. Appendix A.1 and A.2 for a formal description of the biased and non-biased conversion algorithm). Since our model has two sets of trees learned simultaneously, we list all of them in Table 3. Grammar Induction We can see that the trees learned by the joint loss show improved the F1 score and rely less on the branching bias of the tree constructing algorithm (see Dyer et al. (2019)). The big gap of F1 scores on WSJ between the biased and unbiased trees are altered after introducing the structural loss, and the LM unbiased trees significantly outperforms its baseline ON-LSTM. These indicate that the auxiliary supervised task not only lowers the perplexity, but also improves the qualities of the induced trees for the LM task. Looking more into the trees, we find that compared to ON-LSTM, ONLSTM-SYD improves the label prediction accuracy for NP (noun phrases), VP (verb phrases) and PP (prepositional phrases) but fails to improve ADJP (adjective phrases). This suggests that different types of human-annotated constituents may have different influences on language modeling, or that human-annotated trees are themselves biased to differing degrees between different constituent types. 6617 Training Objective Induction Algorithm Parsing F1 Depth WSJ Accuracy on WSJ by Tag R/L Ratio on WSJ Model WSJ10 WSJ ADJP NP VP PP ON-LSTM LM Unbiased 63.2 39.0 4.9 37.9 42.8 49.6 54.2 1.08 ON-LSTM LM Biased 69.5 44.2 5.5 57.0 53.0 52.4 49.6 2.09 ONLSTM-SYDsyd LM+SYD Unbiased 77.6 61.3 7.3 38.2 73.2 69.6 72.9 2.81 ONLSTM-SYDsyd LM+SYD Biased 65.7 45.5 5.5 30.4 40.6 70.7 43.9 5.07 ONLSTM-SYDlm LM+SYD Unbiased 55.1 34.5 4.8 14.9 42.2 16.7 67.4 0.83 ONLSTM-SYDlm LM+SYD Biased 58.0 36.3 5.3 41.1 53.9 52.4 43.0 1.70 Binary Gold Standard Trees – – 88.1 85.6 6.4 100 100 100 100 2.92 Gold standard Trees – – 100 100 5.0 100 100 100 100 2.22 Random Trees (Htut et al., 2018) – – 32.2 18.6 5.3 17.4 22.3 – 16.0 – Balanced Trees (Htut et al., 2018) – – 43.4 24.5 4.6 22.1 20.2 – 9.3 – Left Branching Trees – – 19.6 9.0 12.4 – – – – – Right Branching Trees – – 56.6 39.8 12.4 – – – – – Table 3: Unlabeled parsing results evaluated on the WSJ10 and the full WSJ test set. Numbers in bold font indicate that they are the best compared to those computed from the other parts of the model (i.e., within the same section in the table). The Algorithm column represents whether bias or unbiased algorithm is performed. ONLSTM-SYDsyd and ONLSTM-SYDlm represent two sets of trees induced from loss Lsyd and Llm respectively. The Accuracy columns represent the fraction of ground truth constituents of a given type that correspond to constituents in the model parses. The R/L Ratio column represents the ratio between the number of words that are left children of its parent, and those that are right children. Branching Bias Syntactic trees of English naturally have a bias towards right branching structures. As shown in the last section of Table 3, right branching trees achieve a much higher F1 score than random, balanced or left branching trees. As pointed out by Dyer et al. (2019), PRPN and ONLSTM resort to a distance-to-tree algorithm with right-branching biases (See Appendix A.2). 
For our model, a biased distance-to-tree algorithm yields worse results compared to its nonbiased counterpart; but on unsupervised models such as ON-LSTM, biased algorithms yield better results than non-biased versions. This observation indicates that syntactic supervision leads to better tree structures as compared with fully unsupervised tree induction, which is intuitive. Linguistic Analysis Our best parsing results are for trees decoded from the syntactic prediction objective using the unbiased algorithm. Interestingly, these trees tend to be deeper on average than the (binarized) gold standard trees (see Table 3).3 This appears to be driven by a failure of the model to identify constituents centered on deeply-embedded head words—instead, the model prefers right-branching structures. Some examples of trees are displayed in Figure 3. In the top part of the figure, we see the parse produced from the Lsyd distances of our model, in the middle the tree produced the Llm distances and, on the bottom, the gold standard tree. As can be seen in the figure, the Lsyd-based tree is largely right-branching and misses constituents centered on several deeply em3Please refer to Appendix A.5 for visualizations of a more extensive set of sentences. bedded heads, such as the verb said. By contrast, the Llm-based tree is considerably shallower than the gold-standard and consists of a sequence of smaller chunks that often mis-bracket words with respect to the gold-standard constituent boundaries. Figure 4 illustrates these phenomenon in further detail. The plot at the top of the figure shows the proportion of constituents produced from Lsyd distances whose boundaries correspond to a gold constituent, broken down by height of nodes in the predicted tree. As the plot illustrates, the model fares better on relatively small constituents lower in trees, and makes more errors for constituents higher in the tree, reflecting mistakes on deeplyembedded heads. The bottom of the figure shows the same breakdown for Llm-based induced trees. Overall, the affect is similar, although Llm-based trees are shallower than the Lsyd-based trees. We believe the increased accuracy for the longest constituents is driven by the fact that, since the highest constituents cover long sentence spans and there are few possible long spans, these constituents have a higher baseline probability of being correct. It appears that the Lsyd objective has learned a strong right-branching bias, leading to very deep trees (even with the unbiased decoder) whereas the Llm objective appears to be using a kind of predictive chunking of the sentence into small groups of words. It is tempting to speculate that these chunks may correspond to linguistic units used in prosodic planning or by the human sentence processor, while the deeper trees correspond more directly to the compositional structure underlying sentence meaning. 
We leave exploring this question to future 6618 the company which issued a statement on the agreement late friday said that N million of the payment was previously provided for in its financial statements and that NN will be recognized in its N third-quarter statement the company which issued a statement on the agreement late friday said that N million of the payment was previously provided for in its financial statements and that NN will be recognized in its N third-quarter statement The company which issued a statement on the agreement late Friday said that 1 million of the payment was previously provided for in its financial statements and that 500,000 will be recognized in its 1989 third-quarter statement Figure 3: Trees induced from the syntactic task distances in our model (top), the language modeling task distances (middle) as well as the gold-standard trees (bottom). Figure 4: Accuracy breakdown w.r.t. constituent height in unbiased trees derived from the syntactic task distances in our model (top) and the language modeling distances (bottom). A constituent is considered as correct if its boundaries correspond to a true constituent. The constituents’ heights are those in the predicted tree. Since constituents that represent the whole sentence always have correct boundaries, they are excluded from the calculation. work. Parsing performance Our models give worse unlabeled parsing performance compared to transition-based methods. In particular, Kim et al. (2019a) report that unsupervised URNNG achieves 45.4 WSJ F1 in a similar setting, while another URNNG that finetunes a supervised RNNG model gives a much better F1 of 72.8, leading a 27.4 F1 improvement. In contrast, the F1 of our structure prediction trees is 61.3 in unbiased algorithm. This indicates that our model brings more benefits on the LM side rather than the parsing side. 6 Ablation Study Layer used for supervision Table 4 (Top) shows the performances where the supervised signal is injected into different layers. Although injecting syntax into the last layer gives the best syntactic distance for grammar induction, it fails to achieve a similar improvement on perplexity. This suggests that a better syntactic structure may not always lead to a better language model. The observation is consistent with prior research (Williams et al., 2018). Tree structure We study the influence of the different types of supervised trees to the model. In addition to using the ground truth parse trees, we also tried to train the model with random trees instead, and without providing trees, in which case it degenerates to a vanilla ON-LSTM. From Table 4 (Middle) we can find that without supervision signals from gold standard parse trees the model performs worse than the full model. Random trees introduce noise to the model and downgrade both parsing and LM performance, indicating the importance of injecting meaningful syntax. Multitask variants We also explored injecting the supervised syntactic information at different levels. 
Ablation Study          Experiment Detail       Validation PPL   Test PPL   WSJ F1
Layer for Supervision   1st layer               58.0             55.6       57.7
                        2nd layer               57.8             55.5       59.7
                        3rd layer               57.8             55.7       61.3
Tree Structure          No Parse Tree           58.3             55.9       39.0
                        Random Tree             60.2             57.5       32.4
                        Gold Parse Tree         57.8             55.7       61.3
Multitask Variants      Vanilla Multitasking    60.9             58.5       24.9
                        One set of trees        58.5             55.9       54.4
                        Two sets of trees       57.8             55.7       61.3
Table 4: Perplexity and unlabeled parsing F1 in ablation studies. We use the unbiased algorithm and the layer with supervision injected. For the unsupervised models, we report the layer with the best F1 score. (Top) Supervising different layers. (Middle) Using different tree structures for supervision. (Bottom) Different multitasking strategies.
One straightforward baseline is to add supervision signals directly on the syntactic distances in ON-LSTM, using one set of trees to guide both LM and parsing, as indicated in the Model section (Table 4 Bottom, one set of trees). Despite injecting stronger syntactic signals, this direct approach does not improve language model perplexity. This also reflects the fact that the most suitable syntactic structures for language modeling do not necessarily conform to human-labeled syntax. In addition, we also use ON-LSTM hidden states for supervised syntactic distance prediction (Table 4 Bottom, vanilla multitasking). This approach fails to outperform its ON-LSTM baseline for the same reason. In summary, there are mutual benefits between induced and supervised syntactic information, although they do not fully overlap.
Generalization to other LMs One practical question is whether the improvements found in our work generalize to other language models. To answer this question, we introduce the multitask scheme to PRPN (Shen et al., 2017), another model that is able to learn unsupervised structures through language modeling. Similar to ON-LSTM, PRPN is a syntactic distance method. We modify the PRPN model in the same spirit as ON-LSTM. In addition, we change the encoding layer and use its output as syntactic distance embeddings lsyd. We then map lsyd to two sets of syntactic distances, dlm and dsyd, for language modeling and syntactic distance prediction, respectively. Syntactic supervision is applied to dsyd. The model reaches a test perplexity of 60.5 on PTB-Concat (p-value < 0.05), significantly outperforming the 62.0 of the original model. We refer readers to Appendix A.4 for the details of PRPN and our modified PRPN-SYD.
7 Conclusion
We investigated linguistic supervision for distance-based structure-aware language models, showing its strengths over transition-based counterparts in language modeling. Beyond achieving strong perplexity scores, our model reveals several interesting aspects of the quality of the trees it learns. As a byproduct of our investigation, we release a version of PTB-Concat that contains syntactic structures while following the same pre-processing steps adopted by most previous work on neural language models.
Acknowledgments
We thank Zhiyang Teng, Qi He and all members of the Text Intelligent Lab at Westlake University for insightful discussions. We also thank the anonymous reviewers for their constructive comments. This work is supported by the National Natural Science Foundation of China (NSFC No. 61976180) and the Westlake University and Bright Dream Joint Institute for Intelligent Robotics. The corresponding author is Yue Zhang.
References David Adger. 2003. Core Syntax: A Minimalist Perspective. Oxford University Press. Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. arXiv preprint arXiv:1603.06042. Joan Bresnan. 2001. Lexical functional syntax. WileyBlackwell, Oxford. Christopher Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Gregory N Hullender. 2005. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine learning (ICML-05), pages 89–96. Ciprian Chelba and Frederick Jelinek. 2000. Structured language modeling. Computer Speech & Language, 14(4):283–332. Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016. Long short-term memory-networks for machine reading. arXiv preprint arXiv:1601.06733. Jianpeng Cheng, Adam Lopez, and Mirella Lapata. 2017. A generative parser with a discriminative recognition algorithm. arXiv preprint arXiv:1708.00415. 6620 Noam Chomsky. 1995. The Minimalist Program. The MIT Press, Cambridge, Massachusetts and London, England. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. 2016. Recurrent neural network grammars. arXiv preprint arXiv:1602.07776. Chris Dyer, G´abor Melis, and Phil Blunsom. 2019. A critical analysis of biased parsers in unsupervised parsing. arXiv preprint arXiv:1909.09428. Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Advances in neural information processing systems, pages 1019–1027. Edouard Grave, Armand Joulin, and Nicolas Usunier. 2016. Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426. John Hale, Chris Dyer, Adhiguna Kuncoro, and Jonathan R Brennan. 2018. Finding syntax in human encephalography with beam search. arXiv preprint arXiv:1806.04127. Peter A Heeman and James F Allen. 1997. Incorporating pos tagging into language modeling. arXiv preprint cmp-lg/9705014. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Jan E Holly. 2001. Pictures of ultrametric spaces, the p-adic numbers, and valued fields. The American Mathematical Monthly, 108(8). Phu Mon Htut, Kyunghyun Cho, and Samuel R Bowman. 2018. Grammar induction with neural language models: An unusual replication. arXiv preprint arXiv:1808.10000. Rodney Huddleston and Geoffrey K. Pullum. 2002. The Cambridge Grammar of English Language. Cambridge University Press, Cambridge. Stephen C Johnson. 1967. Hierarchical clustering schemes. Psychometrika, 32(3). Daniel Jurafsky, Chuck Wooters, Jonathan Segal, Andreas Stolcke, Eric Fosler, Gary Tajchaman, and Nelson Morgan. 1995. Using a stochastic context-free grammar as a language model for speech recognition. In 1995 International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 189–192. IEEE. Yoon Kim, Chris Dyer, and Alexander M Rush. 2019a. Compound probabilistic context-free grammars for grammar induction. arXiv preprint arXiv:1906.10225. Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In Thirtieth AAAI Conference on Artificial Intelligence. Yoon Kim, Alexander M Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and G´abor Melis. 2019b. Unsupervised recurrent neural network grammars. arXiv preprint arXiv:1904.03746. 
Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, and Phil Blunsom. 2018. Lstms can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426–1436. Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, and Phil Blunsom. 2019. Scalable syntaxaware language models using knowledge distillation. arXiv preprint arXiv:1906.06438. Willem J. M. Levelt. 1974. Formal Grammars in Linguistics and Psycholinguistics, Volume 3: Psycholinguistic applications. Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing lstm language models. arXiv preprint arXiv:1708.02182. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843. Tom´aˇs Mikolov, Martin Karafi´at, Luk´aˇs Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513–553. Hao Peng, Roy Schwartz, and Noah A Smith. 2019. Palm: A hybrid parser and language model. arXiv preprint arXiv:1909.02134. Carl Pollard and Ivan A Sag. 1994. Head-driven phrase structure grammar. University of Chicago Press. Ivan A. Sag, Thomas Wasow, and Emily M. Bender. 2003. Syntactic Theory: A Formal Introduction. CSLI, Stanford, CA. Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2017. Neural language modeling by jointly learning syntax and lexicon. arXiv preprint arXiv:1711.02013. Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2018. Ordered neurons: Integrating tree structures into recurrent neural networks. arXiv preprint arXiv:1810.09536. 6621 B Srinivas. 1996. ” almost parsing” technique for language modeling. In Proceeding of Fourth International Conference on Spoken Language Processing. ICSLP’96, volume 2, pages 1173–1176. IEEE. Mark Steedman and Jason Baldridge. 2011. Combinatory categorial grammar. Non-Transformational Syntax: Formal and explicit models of grammar, pages 181–224. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. arXiv preprint arXiv:1705.03919. Sho Takase, Jun Suzuki, and Masaaki Nagata. 2018. Direct output connection for a high-rank language model. arXiv preprint arXiv:1808.10143. Dong Hoon Van Uytsel, Filip Van Aelten, and Dirk Van Compernolle. 2001. A structured language model based on context-sensitive probabilistic leftcorner parsing. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, pages 1–8. Association for Computational Linguistics. Adina Williams, Andrew Drozdov*, and Samuel R Bowman. 2018. Do latent tree learning models identify meaningful structure in sentences? Transactions of the Association for Computational Linguistics, 6:253–267. Jerry H Wright, Gareth JF Jones, and Harvey LloydThomas. 1994. A robust language model incorporating a substring parser and extended n-grams. In Proceedings of ICASSP’94. IEEE International Conference on Acoustics, Speech and Signal Processing, volume 1, pages I–361. 
IEEE. Bang Ye Wu, Kun-Mao Chao, and Chuan Yi Tang. 1999. Approximation and exact algorithms for constructing minimum ultrametric trees from distance matrices. Journal of CO, 3(2). Peng Xu, Ciprian Chelba, and Frederick Jelinek. 2002. A study on richer syntactic dependencies for structured language modeling. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 191–198. Association for Computational Linguistics. Peng Xu, Ahmad Emami, and Frederick Jelinek. 2003. Training connectionist models for the structured language model. In Proceedings of the 2003 conference on Empirical methods in natural language processing, pages 160–167. Association for Computational Linguistics. Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. Natural language engineering, 11(2):207–238. Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W Cohen. 2018. Breaking the softmax bottleneck: A high-rank rnn language model. ICLR. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 188–193. Association for Computational Linguistics. Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutn´ık, and J¨urgen Schmidhuber. 2017. Recurrent highway networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 4189–4198. JMLR. org. Konrad Zolna, Devansh Arpit, Dendi Suhubdy, and Yoshua Bengio. 2018. Fraternal dropout. ICLR. Barret Zoph and Quoc V Le. 2016. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578. 6622 A Appendices A.1 Algorithms for transformation between parse trees and syntactic distances The following tree-to-distance algorithm provides a set of distances given a tree. The node indicates the root node of the given tree. Algorithm 1 Binary Parse Tree to Distance (∪represents the concatenation operator of lists) 1: function TREE2DISTANCE(node) 2: if node is leaf then 3: d ←1 4: else 5: childl, childr ←children of node 6: t2dl ←Tree2Distance(childl) 7: t2dr ←Tree2Distance(childr) 8: d ←max(dl, dr) + 1 9: t2d ←t2dl ∪[d] ∪t2dr 10: end if 11: return t2d, d 12: end function The following distance-to-tree conversion algorithm provides an unbiased reconstruction of tree given a set of distances. Algorithm 2 Distance to Binary Parse Tree 1: function DISTANCE2TREE(d) 2: if d ̸= [] then 3: i ←arg maxi(d) 4: childl ←Distance2Tree(d<i) 5: childr ←Distance2Tree(d≥i) 6: node ←Node(childl, childr) 7: end if 8: return node 9: end function A.2 Distance-to-tree algorithm with right-branching bias Algorithm 3 Distance to Binary Parse Tree with Right-Branching Bias 1: function DISTANCE2TREE(d) 2: if d ̸= [] then 3: i ←arg maxi(d) 4: childl ←Distance2Tree(d<i) 5: childr ←Distance2Tree(d>i) 6: nodebias ←Node(nodei, childr) 7: node ←Node(childl, nodebias) 8: end if 9: return node 10: end function A.3 Details of generating our PTB-Concat-Syn version Mikolov et al. (2010) briefly described the steps of converting from the original Penn Treebank dataset to his version of dataset, which later becomes the standard in language modeling task. We denote this version as PTB-Concat. In our paper, to get strictly the same PTB language modeling dataset, we follow his steps on the original Penn Treebank, while preserving the tree structure. 
Specifically, we took the following steps: 1. Convert all tokens to lowercase. 2. For tokens which are purely digits, or digits only with “.” or “-” are converted to token “N”. 3. Replace all “$” with “N”. 4. Delete tokens “\\” and “wa” if their POS tags are “POS” and “NNP”, respectively. 5. Delete all tokens that fall into the following list: [‘‘,\’\’,,,.,:,;,-,?,!,¨,ˆ, ,\\,|,˜, -lrb-,-rrb-,-lcb-,-rcb-,(,),[,], {,},<,>,--,...,‘]. 6. Delete all tokens with tag “-NONE-”. 7. Add a special token “</s>” to the end of each sentence. 8. Truncated the vocabulary at 9, 999 according to the frequencies and assign all the out-ofvocabulary tokens a special token “<unk>”. 9. After the above procedures, there are still minor differences to PTB-Concat. We then go through the whole Penn Treebank corpora to manually fix all the unmatched tokens. These procedures ensures we have exactly the same training, validation and test sets as PTBConcat, the only difference is that our datasets has 6623 additional grammar trees retained from the original PTB dataset. The resulting datasets then becomes PTB-Concat-Syn. A.4 PRPN and PRPN-SYD A.4.1 Parse-Read-Predict Network (PRPN) The idea of PRPN builds upon an assumption that to predict a word xi, we only need information for all precedent siblings in constituent tree. The model constitutes three components: (i) a parsing network that calculates the syntactic distance and parsing gates. (ii) a reading network to model the language, and (iii) a predict network to predict the next word. PRPN first uses a two-layer convolutional network to calculate the syntactic distance d at timestep t: hi = ReLU(Wc   ei−L ei−L+1 ... ei  + bc) (20) di = ReLU (Wdhi + bd) (21) Where ei−L, ..., ei are word embeddings, L is the lookback range. Then the difference between distances is fed through hardtanh to model the degree αt j that how much two words xt and xj are related: αt j = hardtanh ((dt −dj) · τ) + 1 2 (22) Where hardtanh(x) = max(−1, min(1, x)), and τ is the temperature parameter. For word xi, the first precedent word xt with a small value αt i represents xt and all its precedents are not likely to be siblings of xi. The following parsing gate gt i models the probability of xt and xi being siblings: gt i = P(lt ≤i) = t−1 Y j=i+1 αt j (23) The reading network is a variant of Long ShortTerm Memory-Network (LSTMN) (Cheng et al., 2016) where the attention score is softly truncated by parsing gates: st i = gt i˜st i P i gt i (24) The predict network utilizes the structure-aware hidden states of reading network to predict the next word. A.4.2 The PRPN-SYD model We re-designed the parsing network. We use LSTM to encode each embedding sequence s = (e0, e1, ..., en),. Because the task of language modeling prohibits seeing future words, we use unidirectional LSTM: h0, ..., hn = LSTMw(e0, ..., en) (25) We stack a convolutional layer on top of the hidden states hi of the LSTM, which helps gather local syntactic information: g0, ..., gn = CONV(h0, ..., hn) (26) Next, syntactical information learned both locally and globally are integrated by using another unidirectional LSTM: ˆh0, ..., ˆhn = LSTMd(g0, ..., gn) (27) We pass the ˆh layer through two 2-layer fullyconnected networks which output two respective sets of distance scalars: dlm i = FFlm(ˆhi) dsyd i = FFsyd(ˆhi) (28) Where dlm is the distance for language modeling while dsyd is for syntactic distance prediction. For two sets of distances, we use the same objective functions as described in ONLSTM-SYD. 
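To make the two-head design in Eqs. (25-28) concrete, the following PyTorch-style sketch wires together the word-level LSTM, the convolution over its hidden states, the second LSTM, and the two feed-forward heads that output dlm and dsyd. It is a minimal sketch under our own assumptions (module names, hidden sizes, and a causal 1-D convolution for the CONV layer), not the released PRPN-SYD implementation.

```python
import torch
import torch.nn as nn

class SyntacticDistanceHeads(nn.Module):
    """Sketch of the modified parsing network (Eqs. 25-28):
    word LSTM -> local convolution -> second LSTM -> two FF heads."""

    def __init__(self, emb_size, hidden_size, kernel_size=3):
        super().__init__()
        # Unidirectional LSTMs: language modeling forbids looking at future words.
        self.word_lstm = nn.LSTM(emb_size, hidden_size, batch_first=True)
        # 1-D convolution over hidden states gathers local syntactic cues.
        self.conv = nn.Conv1d(hidden_size, hidden_size, kernel_size,
                              padding=kernel_size - 1)
        self.dist_lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        # Two 2-layer feed-forward heads: one set of distances per objective.
        self.ff_lm = nn.Sequential(nn.Linear(hidden_size, hidden_size),
                                   nn.ReLU(), nn.Linear(hidden_size, 1))
        self.ff_syd = nn.Sequential(nn.Linear(hidden_size, hidden_size),
                                    nn.ReLU(), nn.Linear(hidden_size, 1))

    def forward(self, embeddings):  # embeddings: (batch, seq_len, emb_size)
        h, _ = self.word_lstm(embeddings)                   # Eq. 25
        # Eq. 26; with symmetric padding, trimming the tail keeps the window
        # over the current and past positions only (causal, an assumption here).
        g = self.conv(h.transpose(1, 2))[..., :h.size(1)].transpose(1, 2)
        h_hat, _ = self.dist_lstm(g)                        # Eq. 27
        d_lm = self.ff_lm(h_hat).squeeze(-1)                # Eq. 28, distances for LM
        d_syd = self.ff_syd(h_hat).squeeze(-1)              # Eq. 28, distances for syntax
        return d_lm, d_syd
```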
A.5 Trees
We visualize a set of sentences (14 in total) and their corresponding trees in parallel to contrast the qualitative differences between the model-induced trees and the gold-standard trees. Sentences are selected randomly from the dataset. In each of the following figures, we provide three trees for the same sentence, corresponding to trees induced from the syntactic task (top) and language model task (middle) sets of distances, as well as the gold-standard tree (bottom).
[Figures 5-18: Sentences 1-14. Each figure shows the tree induced from the syntactic task distances (top), the tree induced from the language modeling task distances (middle), and the gold-standard tree (bottom).]
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6629–6639 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6629 Learning Architectures from an Extended Search Space for Language Modeling Yinqiao Li1, Chi Hu1, Yuhao Zhang1, Nuo Xu1, Yufan Jiang1, Tong Xiao1,2∗, Jingbo Zhu1,2, Tongran Liu3, Changliang Li4 1NLP Lab, Northeastern University, Shenyang, China 2NiuTrans Research, Shenyang, China 3CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS, Beijing, China 4Kingsoft AI Lab, Beijing, China [email protected], {huchinlp,yoohao.zhang}@gmail.com, {xunuo0629,jiangyufan2018}@outlook.com, {xiaotong,zhujingbo}@mail.neu.edu.com, [email protected],[email protected] Abstract Neural architecture search (NAS) has advanced significantly in recent years but most NAS systems restrict search to learning architectures of a recurrent or convolutional cell. In this paper, we extend the search space of NAS. In particular, we present a general approach to learn both intra-cell and inter-cell architectures (call it ESS). For a better search result, we design a joint learning method to perform intra-cell and inter-cell NAS simultaneously. We implement our model in a differentiable architecture search system. For recurrent neural language modeling, it outperforms a strong baseline significantly on the PTB and WikiText data, with a new state-of-the-art on PTB. Moreover, the learned architectures show good transferability to other systems. E.g., they improve state-of-the-art systems on the CoNLL and WNUT named entity recognition (NER) tasks and CoNLL chunking task, indicating a promising line of research on large-scale prelearned architectures. 1 Introduction Neural models have shown remarkable performance improvements in a wide range of natural language processing (NLP) tasks. Systems of this kind can broadly be characterized as following a neural network design: we model the problem via a pre-defined neural architecture, and the resulting network is treated as a black-box family of functions for which we find parameters that can generalize well on test data. This paradigm leads to many successful NLP systems based on well-designed architectures. The earliest of these makes use of recurrent neural networks (RNNs) for representation learning (Bahdanau et al., 2015; Wu et al., 2016), ∗Corresponding author. whereas recent systems have successfully incorporated fully attentive models into language generation and understanding (Vaswani et al., 2017). In designing such models, careful engineering of the architecture plays a key role for the state-ofthe-art though it is in general extremely difficult to find a good network structure. The next obvious step is toward automatic architecture design. A popular method to do this is neural architecture search (NAS). In NAS, the common practice is that we first define a search space of neural networks, and then find the most promising candidate in the space by some criteria. Previous efforts to make NAS more accurate have focused on improving search and network evaluation algorithms. But the search space is still restricted to a particular scope of neural networks. For example, most NAS methods are applied to learn the topology in a recurrent or convolutional cell, but the connections between cells are still made in a heuristic manner as usual (Zoph and Le, 2017; Elsken et al., 2019). Note that the organization of these sub-networks remains important as to the nature of architecture design. 
For example, the first-order connectivity of cells is essential to capture the recurrent dynamics in RNNs. More recently, it has been found that additional connections of RNN cells improve LSTM models by accessing longer history on language modeling tasks (Melis et al., 2019). Similar results appear in Transformer systems. Dense connections of distant layers help in learning a deep Transformer encoder for machine translation (Shen et al., 2018). A natural question that arises is: can we learn the connectivity of sub-networks for better architecture design? In this paper, we address this issue by enlarging the scope of NAS and learning connections among 6630 ht−1 xt−1 xt ht+1 xt+1 ht (a) Connections in a cell ht−1 xt−1 ht xt ht+1 xt+1 ht−3 ht−2 (b) Connections among cells Figure 1: Examples of intra and inter-cell architectures. sub-networks that are designed in either a handcrafted or automatic way (Figure 1). We call this the Extended Search Space method for NAS (or ESS for short). Here, we choose differentiable architecture search as the basis of this work because it is efficient and gradient-friendly. We present a general model of differentiable architecture search to handle arbitrary search space of NAS, which offers a unified framework of describing intra-cell NAS and inter-cell NAS. Also, we develop a joint approach to learning both high-level and low-level connections simultaneously. This enables the interaction between intra-cell NAS and inter-cell NAS, and thus the ability of learning the full architecture of a neural network. Our ESS method is simple for implementation. We experiment with it in an RNN-based system for language modeling. On the PTB and WikiText data, it outperforms a strong baseline significantly by 4.5 and 2.4 perplexity scores. Moreover, we test the transferability of the learned architecture on other tasks. Again, it shows promising improvements on both NER and chunking benchmarks, and yields new state-of-the-art results on NER tasks. This indicates a promising line of research on largescale pre-learned architectures. More interestingly, it is observed that the inter-cell NAS is helpful in modeling rare words. For example, it yields a bigger improvement on the rare entity recognition task (WNUT) than that on the standard NER task (CoNLL). 2 Related work NAS is a promising method toward AutoML (Hutter et al., 2018), and has been recently applied to NLP tasks (So et al., 2019; Jiang et al., 2019; Li and Talwalkar, 2019). Several research teams have investigated search strategies for NAS. The very early approaches adopted evolutionary algorithms to model the problem (Angeline et al., 1994; Stanley and Miikkulainen, 2002), while Bayesian and reinforcement learning methods made big progresses in computer vision and NLP later (Bergstra et al., 2013; Baker et al., 2017; Zoph and Le, 2017). More recently, gradient-based methods were successfully applied to language modeling and image classification based on RNNs and CNNs (Liu et al., 2019a). In particular, differentiable architecture search has been of great interest to the community because of its efficiency and compatibility to off-the-shelf tools of gradient-based optimization. Despite of great success, previous studies restricted themselves to a small search space of neural networks. For example, most NAS systems were designed to find an architecture of recurrent or convolutional cell, but the remaining parts of the network are handcrafted (Zhong et al., 2018; Brock et al., 2018; Elsken et al., 2019). 
For a larger search space, Zoph et al. (2018) optimized the normal cell (i.e., the cell that preserves the dimensionality of the input) and reduction cell (i.e., the cell that reduces the spatial dimension) simultaneously and explored a larger region of the space than the singlecell search. But it is still rare to see studies on the issue of search space though it is an important factor to NAS. On the other hand, it has been proven that the additional connections between cells help in RNN or Transformer-based models (He et al., 2016; Huang et al., 2017; Wang et al., 2018, 2019). These results motivate us to take a step toward the automatic design of inter-cell connections and thus search in a larger space of neural architectures. 3 Inter-Cell and Intra-Cell NAS In this work we use RNNs for description. We choose RNNs because of their effectiveness at preserving past inputs for sequential data processing tasks. Note that although we will restrict ourselves to RNNs for our experiments, the method and discussion here can be applied to other types of models. 6631 3.1 Problem Statement For a sequence of input vectors {x1, ..., xT }, an RNN makes a cell on top of every input vector. The RNN cell receives information from previous cells and input vectors. The output at time step t is defined to be: ht = π(ˆht−1, ˆxt) (1) where π(·) is the function of the cell. ˆht−1 is the representation vector of previous cells, and ˆxt is the representation vector of the inputs up to time step t. More formally, we define ˆht−1 and ˆxt as functions of cell states and model inputs, like this ˆht−1 = f(h[0,t−1]; x[1,t−1]) (2) ˆxt = g(x[1,t]; h[0,t−1]) (3) where h[0,t−1] = {h0, ..., ht−1} and x[1,t−1] = {x1, ..., xt−1}. f(·) models the way that we pass information from previous cells to the next. Likewise, g(·) models the case of input vectors. These functions offer a general method to model connections between cells. For example, one can obtain a vanilla recurrent model by setting ˆht−1 = ht−1 and ˆxt = xt, while more intra-cell connections can be considered if sophisticated functions are adopted for f(·) and g(·). While previous work focuses on searching for the desirable architecture design of π(·), we take f(·) and g(·) into account and describe a more general case here. We separate two sub-problems out from NAS for conceptually cleaner description: • Intra-Cell NAS. It learns the architecture of a cell (i.e., π(·)). • Inter-Cell NAS. It learns the way of connecting the current cell with previous cells and input vectors (i.e., f(·) and g(·)). In the following, we describe the design and implementation of our inter-cell and intra-cell NAS methods. 3.2 Differentiable Architecture Search For search algorithms, we follow the method of differentiable architecture search (DARTS). It is gradient-based and runs orders of magnitude faster than earlier methods (Zoph et al., 2018; Real et al., 2019). DARTS represents networks as a directed acyclic graph (DAG) and search for the appropriate architecture on it. For a DAG, the edge oi,j(·) F(α, β) ... ... α Sα ... ... β Sβ Figure 2: Formalizing intra and inter-cell NAS as learning function F(·). between node pair (i, j) performs an operation to transform the input (i.e., tail) to the output (i.e., head). Like Liu et al. (2019a)’s method and others, we choose operations from a list of activation functions, e.g., sigmoid, identity and etc1. A node represents the intermediate states of the networks. 
For node i, it weights vectors from all predecessor nodes (j < i) and simply sums over them. Let si be the state of node i. We define si to be: si = X j<i X k θi,j k · oi,j k (sj · Wj) (4) where Wj is the parameter matrix of the linear transformation, and θi,j k is the weight indicating the importance of oi,j k (·). Here the subscript k means the operation index. θi,j k is obtained by softmax normalization over edges between nodes i and j: θi,j k = exp(wi,j k )/ P k′ exp(wi,j k′ ). In this way, the induction of discrete networks is reduced to learning continuous variables {θi,j k } at the end of the search process. This enables the use of efficient gradient descent methods. Such a model encodes an exponentially large number of networks in a graph, and the optimal architecture is generated by selecting the edges with the largest weights. The common approach to DARTS constraints the output of the generated network to be the last node that averages the outputs of all preceding nodes. Let sn be the last node of the network. We have sn = 1 n −1 n−1 X i=1 si (5) Given the input vectors, the network found by DARTS generates the result at the final node sn. 1We also consider a special activation function “drop” that unlinks two nodes. 6632 ht−1 xt yt ˆxt ˆht−1 ht xt−1 xt−2 xt−3 ht−1 ht−2 ht−3 ... e1 s1 s2 s3 sn ˆht−1 ˆxt ht Intra-cell ... Avg e1 e2 e3 s1 s2 s3 s4 sn ht−1 ht−2 ht−3 ⊙ Inter-cell ... Avg xt Figure 3: An example of intra-cell and inter-cell NAS in RNN models. Here we present a method to fit this model into intra and inter-cell NAS. We re-formalize the function for which we find good architectures as F(α; β). α and β are two groups of the input vectors. We create DAGs on them individually. This gives us two DAGs with sα and sβ as the last nodes. Then, we make the final output by a Hadamard product of sα and sβ, like this, F(α; β) = sα ⊙sβ (6) See Figure 2 for the network of an example F(α; β). This method transforms the NAS problem into two learning tasks. The design of two separate networks allows the model to group related inputs together, rather than putting everything into a “magic” system of NAS. For example, for the inter-cell function f(·), it is natural to learn the pre-cell connection from h[0,t−1], and learn the impact of the model inputs from x[1,t−1]. It is worth noting that the Hadamard product of sα and sβ is doing something very similar to the gating mechanism which has been widely used in NLP (Dauphin et al., 2017; Bradbury et al., 2017; Gehring et al., 2017). For example, one can learn sβ as a gate and control how much sα is used for final output. Table 1 gives the design of α and β for the functions used in this work. Another note on F(α; β). The grouping reduces a big problem into two cheap tasks. It is particularly important for building affordable NAS systems because computational cost increases exponentially as more input nodes are involved. Our method instead has a linear time complexity if we adopt a reasonable constraint on group size, leading to a Function α β π(·) {ˆht−1, ˆxt} 1 f(·) h[0,t−1] x[1,t−1] g(·) x[1,t] h[0,t−1] Table 1: α and β for different functions possibility of exploring a much larger space during the architecture search process. 3.3 The Intra-Cell Search Space The search of intra-cell architectures is trivial. Since β = 1 and sβ = 1 (see Table 1), we are basically performing NAS on a single group of input vectors ˆht−1 and ˆxt. We follow Liu et al. (2019a)’s work and force the input of networks to be a single layer network of ˆht−1 and ˆxt. 
This can be described as
e1 = tanh(ĥt−1 · W(h) + x̂t · W(x))   (7)
where W(h) and W(x) are the parameters of the transformation and tanh is the non-linear activation. e1 is the input node of the graph. See Figure 3 for an illustration of intra-cell NAS in an RNN model.
3.4 The Inter-Cell Search Space
To learn ĥt−1 and x̂t, we can run the DARTS system as described above. However, Eqs. (2-3) define a model with a varying number of parameters for different time steps, to which our architecture search method is not straightforwardly applicable. Apart from this, a long sequence of RNN cells makes the search intractable.
Function JOINTLEARN(rounds, w, W)
1: for i in range(1, rounds) do
2:   while intra-cell model not converged do
3:     Update intra-cell w(intra) and W
4:   while inter-cell model not converged do
5:     Update inter-cell w(inter) and W
6: Derive architecture based on w
7: return architecture
Figure 4: Joint search of intra-cell and inter-cell architectures. w = edge weights, and W = model parameters.
For a simplified model, we re-define f(·) and g(·) as:
f(h[0,t−1]; x[1,t−1]) = f′(ht−1; x[t−m,t−1])   (8)
g(x[1,t]; h[0,t−1]) = g′(xt; h[t−m,t−1])   (9)
where m is a hyper-parameter that determines how much history is considered. Eq. (8) indicates a model that learns a network on x[t−m,t−1] (i.e., β = x[t−m,t−1]). The output of the learned network (i.e., sβ) is then used as a gate to control the information that we pass from the previous cell to the current cell (i.e., α = {ht−1}). Likewise, Eq. (9) defines a gate on h[t−m,t−1] and controls the information flow from xt to the current cell. Learning f′(·) and g′(·) fits our method well due to the fixed number of input vectors. Note that f′(·) has m input vectors x[t−m,t−1] for learning the gate network. Unlike what we do in intra-cell NAS, we do not concatenate them into a single input vector. Instead, we create a node for every input vector; that is, the input vector ei = xt−i links with node si. We restrict si to receive input only from ei for better processing of each input. This can be seen as a pruned version of the model described in Eq. (4). See Figure 3 for an illustration of inter-cell NAS.
4 Joint Learning for Architecture Search
Our model is flexible. For architecture search, we can run intra-cell NAS, inter-cell NAS, or both as needed. However, we found that simply joining separately searched intra-cell and inter-cell architectures might not be desirable, because each method is restricted to a particular region of the search space, and their simple combination cannot guarantee a global optimum. This necessitates including the interaction between intra-cell and inter-cell architectures in the search process. Generally, the optimal inter-cell architecture depends on the intra-cell architecture used during search, and vice versa.
A simple method that addresses this issue is to learn the two models jointly. Here, we design a joint search method to make use of the interaction between intra-cell NAS and inter-cell NAS. Figure 4 shows the algorithm. It runs for a number of rounds. In each round, we first learn an optimal intra-cell architecture with the inter-cell architecture fixed, and then learn a new inter-cell architecture with the intra-cell architecture we have just found fixed. Obviously, a single run of intra-cell (or inter-cell) NAS is a special case of our joint search method. For example, one can turn off the inter-cell NAS part (lines 4-5 in Figure 4) and learn intra-cell architectures alone.
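As a rough illustration of Figure 4, the Python sketch below spells out the alternating loop. The searcher objects, their step, converged, and derive methods, and the way the model parameters are threaded through are our own illustrative assumptions, not the authors' implementation.

```python
def joint_search(rounds, intra, inter, model_params):
    """Alternating intra-/inter-cell architecture search (cf. Figure 4).

    `intra` and `inter` are assumed (hypothetically) to expose:
      - step(model_params): one update of their edge weights w and of model_params
      - converged(): a stopping criterion, e.g. patience on validation loss
      - derive(): discretize the learned edge weights into an architecture
    """
    for _ in range(rounds):
        # Phase 1: fix the inter-cell architecture, update intra-cell edge weights.
        while not intra.converged():
            model_params = intra.step(model_params)
        # Phase 2: fix the intra-cell architecture just found, update inter-cell weights.
        while not inter.converged():
            model_params = inter.step(model_params)
    # Discretize: keep the highest-weighted operation on each edge (as in DARTS).
    return intra.derive(), inter.derive()
```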
In a sense, the joint NAS method extends the search space of individual intra-cell (or inter-cell) NAS. Both intra-cell and inter-cell NAS shift to a new region of the parameter space in a new round. This implicitly explores a larger number of underlying models. As shown in our experiments, joint NAS learns intra-cell architectures unlike those of the individual intra-cell NAS, which leads to better performance in language modeling and other tasks. 5 Experiments We experimented with our ESS method on Penn Treebank and WikiText language modeling tasks and applied the learned architecture to NER and chunking tasks to test its transferability. 5.1 Experimental Setup For language modeling task, the monolingual and evaluation data came from two sources. • Penn Treebank (PTB). We followed the standard preprocessed version of PTB (Mikolov et al., 2010). It consisted of 929k training words, 73k validation words and 82k test words. The vocabulary size was set to 10k. • WikiText-103 (WT-103). We also used WikiText-103 (Merity et al., 2017) data to search for a more universal architecture for NLP tasks. This dataset contained a larger training set of 103 million words and 0.2 million words in the validation and test sets. 6634 Dataset Method Search Space Params Perplexity Search Cost intra-cell inter-cell valid test (GPU days) PTB AWD-LSTM (Merity et al., 2018c) 24M 61.2 58.8 Transformer-XL (Dai et al., 2019) 24M 56.7 54.5 Mogrifier LSTM (Melis et al., 2019) 23M 51.4 50.1 ENAS (Pham et al., 2018)  24M 60.8 58.6 0.50 RS (Li and Talwalkar, 2019)  23M 57.8 55.5 2 DARTS†  23M 55.2 53.0 0.25 ESS  23M 54.1 52.3 0.5 ESS   23M 47.9 45.6 0.5 WT-103 QRNN (Merity et al., 2018a) 151M 32.0 33.0 Hebbian + Cache (Rae et al., 2018) 29.9 29.7 Transformer-XL (Dai et al., 2019) 151M 23.1 24.0 DARTS†  151M 31.4 31.6 1 ESS   156M 28.8 29.2 1.5 Table 2: Comparison of language modeling methods on PTB and WikiText-103 tasks (lower perplexity is better). †Obtained by training the corresponding architecture using our setup. NER and chunking tasks were also used to test the transferability of the pre-learned architecture. We transferred the intra and inter-cell networks learned on WikiText-103 to the CoNLL-2003 (English), the WNUT-2017 NER tasks and the CoNLL2000 tasks. The CoNLL-2003 task focused on the newswire text, while the WNUT-2017 contained a wider range of English text which is more difficult to model. Our ESS method consisted of two components, including recurrent neural architecture search and architecture evaluation. During the search process, we ran our ESS method to search for the intra-cell and inter-cell architectures jointly. In the second stage, the learned architecture was trained and evaluated on the test dataset. For architecture search on language modeling tasks, we applied 5 activation functions as the candidate operations, including drop, identity, sigmoid, tanh and relu. On the PTB modeling task, 8 nodes were equipped in the recurrent cell. For the intercell architecture, it received 3 input vectors from the previous cells and consisted of the same number of the intermediate nodes. By default, we trained our ESS models for 50 rounds. We set batch = 256 and used 300 hidden units for the intra-cell model. The learning rate was set as 3 × 10−3 for the intracell architecture and 1 × 10−3 for the inter-cell architecture. The BPTT (Werbos, 1990) length was 35. For the search process on WikiText-103, we developed a more complex model to encode the representation. 
There were 12 nodes in each cell and 5 nodes in the inter-cell networks. The batch size was 128 and the number of hidden units was 300, the same as on the PTB task. We set the intra-cell and inter-cell learning rates to 1 × 10−3 and 1 × 10−4. A larger window size (= 70) for BPTT was applied for WikiText-103. All experiments were run on a single NVIDIA 1080Ti.
After the search process, we trained the learned architectures on the same data. To make the results comparable with previous work, we copied the setup in Merity et al. (2018b). For PTB, the size of the hidden layers was set to 850 and the number of training epochs was 3,000. For WikiText-103, we enlarged the number of hidden units to 2,500 and trained the model for 30 epochs. Additionally, we transferred the learned architecture to NER and chunking tasks with the setting in Akbik et al. (2019), modifying only the batch size (24) and hidden size (512).
5.2 Results
5.2.1 Language Modeling Tasks
Here we report the perplexity scores, number of parameters and search cost on the PTB and WikiText-103 datasets (Table 2). First of all, the joint ESS method improves performance on the language modeling tasks significantly. Moreover, it does not introduce many parameters. Our ESS method achieves a state-of-the-art result on the PTB task: it outperforms the manually designed Mogrifier LSTM by 4.5 perplexity scores on the test set. On the WikiText task, it still yields a 2.4-point perplexity improvement over the strong NAS baseline (DARTS). These results indicate that ESS is robust and can learn better architectures by enlarging the scope of the search space.
Also, we find that searching for appropriate connections among cells plays a more important role in improving model performance. We observe that the intra-cell NAS (DARTS) system underperforms the inter-cell counterpart with the same number of parameters. This is because well-designed intra-cell architectures (e.g., the Mogrifier LSTM) are actually competitive with the NAS structures. However, the fragile connections among different cells greatly restrict the representation space; the additional inter-cell connections are able to encode much richer context.
Nevertheless, our ESS method does not defeat the manually designed Transformer-XL model on the WikiText-103 dataset, even though ESS works better than other RNN-based NAS methods. This is partially due to the stronger ability of Transformer-XL to capture language representations. Note that RNNs are not good at modeling long-distance dependencies even if more history states are considered. Applying ESS to Transformer would be worth trying, but it is out of the scope of this work.
Figure 5: Perplexity on the validation data (PTB) vs. number of nodes in intra and inter-cell.
5.2.2 Sensitivity Analysis
To modulate the complexity of the intra- and inter-cell architectures, we study the system behavior under different numbers of intermediate nodes (Figure 5). Fixing the number of model parameters, we compare systems with different numbers of intra- and inter-cell nodes. Due to limited space, we show results on PTB in the following sensitivity analysis. We observe that an appropriate choice of node numbers (8 nodes for the intra-cell and 3 nodes for the inter-cell architecture) brings a consistent improvement. More interestingly, we find that too many nodes for the inter-cell architecture do not improve the model's representation ability.
This is reasonable because more inter-cell nodes amount to considering more history in our system. But for language modeling, the current state is mostly relevant to the most recent words. Too many inputs to the gate networks raise difficulties in modeling.
Figure 6: Perplexity on the validation data (PTB) and Mean Absolute Deviation (MAD) between edge weights and the uniform distribution vs. number of training steps. (Left: perplexity of the joint and intra-only systems; right: MAD of the intra- and inter-cell architectures.)
We observe that our ESS method leads to a model that is easier to train. The left part of Figure 6 plots the validation perplexity at different training steps. The loss curve of joint ESS goes down significantly as training proceeds. More interestingly, our joint learning method allows the model to achieve a lower perplexity than the intra-cell NAS system. This indicates that better networks can be obtained in the search process. Additionally, convergence can be observed in the right part of Figure 6. Here we use Mean Absolute Deviation (MAD) to define the distance between the edge weights and the initial uniform distribution. Both the intra-cell and inter-cell architectures change little in the final search steps.
Word         Count   ∆loss      Word     Count   ∆loss
mcmoran      11      -0.74      the      59421   -0.009
cie.         9       -0.66      <unk>    53299   -0.004
mall         13      -0.65      <eos>    49199   -0.010
missile      23      -0.55      N        37607   -0.008
siemens      12      -0.51      of       28427   -0.008
baldwin      9       -0.51      to       27430   -0.004
nfl          21      -0.49      a        24755   -0.013
prime-time   17      -0.47      in       21032   -0.015
Table 3: Difference in word loss (normalized by word counts) on the validation data when searching intra and inter-cell jointly. The left columns contain the words with the eight best improvements (larger absolute value of ∆loss) and the right columns present the most frequent words in the validation data.
To better understand the advantage of inter-cell connections, we detail the model's contribution on each word of the validation data. Specifically, we compute the difference in word loss (i.e., BiLSTM log perplexity) between the methods with and without inter-cell NAS. The words with the eight best improvements are shown in the left columns of Table 3. We observe that rare words in the training set obtain more significant improvements. In contrast, the most frequent words see a very modest decrease in loss (right columns of Table 3). This is because the connections between multiple cells enable learning rare word representations from longer histories. Common words, by contrast, can obtain this information from rich contexts, so more inputs from previous cells do not bring much useful information.
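For readers who want to reproduce the kind of breakdown reported in Table 3, the sketch below shows one plausible way to aggregate token-level losses by word. The two per-token negative log-likelihood lists are assumed to come from the models with and without inter-cell NAS, and the function name and interface are our own, not released code.

```python
from collections import defaultdict

def per_word_loss_delta(tokens, nll_with_inter, nll_without_inter):
    """Mean change in token-level NLL when inter-cell search is added.

    tokens:            validation tokens, e.g. ["the", "company", ...]
    nll_with_inter:    per-token negative log-likelihood of the joint model
    nll_without_inter: per-token NLL of the intra-cell-only model
    Returns {word: (count, mean NLL difference)}; negative values mean the
    joint model assigns the word a lower loss, as in Table 3.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for tok, a, b in zip(tokens, nll_with_inter, nll_without_inter):
        totals[tok] += a - b
        counts[tok] += 1
    return {w: (counts[w], totals[w] / counts[w]) for w in counts}

# Example usage: rank words by improvement (most negative delta first).
# deltas = per_word_loss_delta(toks, nll_joint, nll_intra)
# best_eight = sorted(deltas.items(), key=lambda kv: kv[1][1])[:8]
```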
Figure 7: Comparison of intra-cell architectures found by using and not using additional inter-cell connections. (a) An intra-cell architecture found by using inter-cell connections. (b) An intra-cell architecture found without using inter-cell connections.
Models                                     F1
LSTM-CRF (Lample et al., 2016)             90.94
LSTM-CRF + ELMo (Peters et al., 2018)      92.22
LSTM-CRF + Flair (Akbik et al., 2019)      93.18
GCDT + BERT-LARGE (Liu et al., 2019b)      93.47
CNN Large + ELMo (Baevski et al., 2019)    93.50
DARTS + Flair (Jiang et al., 2019)         93.13
I-DARTS + Flair (Jiang et al., 2019)       93.47
ESS                                        91.78
ESS + Flair                                93.62
Table 4: F1 scores on the CoNLL-2003 NER task.
Additionally, we visualize the learned intra-cell architecture in Figure 7(a). The networks are jointly learned with the inter-cell architecture. Compared with the result of intra-cell-only NAS (Figure 7(b)), the learned network is shallower. The inter-cell architectures have deeper networks, which in turn reduces the need for intra-cell capacity. Thus a very deep intra-cell architecture might not be necessary if we learn the whole model jointly.
5.2.3 Transferring to Other Tasks
After architecture search, we test the transferability of the learned architecture. In order to apply the model to other tasks, we directly use the architecture searched on WikiText-103 and train the parameters with the in-domain data. In our experiments, we adapt the model to the CoNLL-2003 and WNUT-2017 NER tasks and the CoNLL-2000 chunking task.
Models                                     F1
Cross-BiLSTM-CNN (Aguilar et al., 2018)    45.55
Flair (Akbik et al., 2019)                 50.20
DARTS + Flair†                             50.34
ESS                                        48.85
ESS + Flair                                52.18
Table 5: F1 scores on the WNUT-2017 NER task. †Obtained by training the corresponding architecture using our setup.
Models                                     F1
NCRF++ (Yang and Zhang, 2018)              95.06
BiLSTM-CRF + IntNet (Xin et al., 2018)     95.29
Flair (Akbik et al., 2019)                 96.72
GCDT + BERT-LARGE (Liu et al., 2019b)      97.30
DARTS + Flair†                             96.59
ESS                                        95.51
ESS + Flair                                97.22
Table 6: F1 scores on the CoNLL-2000 chunking task. †Obtained by training the corresponding architecture using our setup.
For the two NER tasks, the model achieves new state-of-the-art F1 scores (Table 4 and Table 5). ELMo, Flair and BERT-LARGE refer to pre-trained language models; we apply these word embeddings to the learned architecture during model training. For the chunking task, the learned architecture also shows stronger performance than other NAS methods (Table 6). Moreover, we find that our pre-learned neural networks yield bigger improvements on the WNUT-2017 task. The difference between the two NER tasks is that WNUT-2017 is a long-tail emerging entity recognition task: it focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. As we discussed earlier in this section, the additional inter-cell NAS is good at learning representations of rare words. Therefore, it makes sense that the bigger improvement is on WNUT-2017.
6 Conclusions
We have proposed the Extended Search Space (ESS) method for NAS, which learns intra-cell and inter-cell architectures simultaneously. Moreover, we present a general model of differentiable architecture search that handles an arbitrary search space, so that the high-level and low-level sub-networks can be learned in a joint fashion. Experiments on two language modeling tasks show that ESS yields improvements of 4.5 and 2.4 perplexity scores over a strong RNN-based baseline. More interestingly, transferring the pre-learned architectures to other tasks also brings promising performance improvements.
Acknowledgments
This work was supported in part by the National Science Foundation of China (Nos. 61876035 and 61732005), the National Key R&D Program of China (No. 2019QY1801) and the Opening Project of Beijing Key Laboratory of Internet Culture and Digital Dissemination Research. The authors would like to thank the anonymous reviewers for their comments.
References
Gustavo Aguilar, Adrian Pastor López-Monroy, Fabio González, and Thamar Solorio. 2018. Modeling noisiness to recognize named entities using multitask neural networks on social media.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1401–1412, New Orleans, Louisiana. Association for Computational Linguistics. Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. In NAACL 2019, 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics, page 724–728. Peter J. Angeline, Gregory M. Saunders, and Jordan B. Pollack. 1994. An evolutionary algorithm that constructs recurrent neural networks. IEEE Trans. Neural Networks, 5(1):54–65. Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. 2019. Cloze-driven pretraining of self-attention networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5359–5368, Hong Kong, China. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. 2017. Designing neural network architectures using reinforcement learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. James Bergstra, Daniel Yamins, and David D. Cox. 2013. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pages 115–123. James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2017. Quasi-recurrent neural networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Andrew Brock, Theodore Lim, James M. Ritchie, and Nick Weston. 2018. SMASH: one-shot model architecture search through hypernetworks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988, Florence, Italy. Association for Computational Linguistics. Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 933–941. Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. 2019. Efficient multi-objective neural architecture search via lamarckian evolution. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. 
In Proceed6638 ings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pages 1243–1252. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. 2017. Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 2261–2269. Frank Hutter, Lars Kotthoff, and Joaquin Vanschoren, editors. 2018. Automated Machine Learning: Methods, Systems, Challenges. Springer. In press, available at http://automl.org/book. Yufan Jiang, Chi Hu, Tong Xiao, Chunliang Zhang, and Jingbo Zhu. 2019. Improved differentiable architecture search for language modeling and named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3583–3588, Hong Kong, China. Association for Computational Linguistics. Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego California, USA, June 12-17, 2016, pages 260–270. Liam Li and Ameet Talwalkar. 2019. Random search and reproducibility for neural architecture search. In Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI 2019, Tel Aviv, Israel, July 22-25, 2019, page 129. Hanxiao Liu, Karen Simonyan, and Yiming Yang. 2019a. DARTS: differentiable architecture search. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019b. GCDT: A global context enhanced deep transition architecture for sequence labeling. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2431–2441. G´abor Melis, Tom´aˇs Koˇcisk´y, and Phil Blunsom. 2019. Mogrifier lstm. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018a. An analysis of neural language modeling at multiple scales. CoRR, abs/1803.08240. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018b. An analysis of neural language modeling at multiple scales. CoRR, abs/1803.08240. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018c. Regularizing and optimizing LSTM language models. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. Stephen Merity, Bryan McCann, and Richard Socher. 2017. Revisiting activation regularization for language rnns. Tomas Mikolov, Martin Karafi´at, Luk´as Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan, September 26-30, 2010, pages 1045–1048. Matthew E. 
Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227–2237. Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. 2018. Efficient neural architecture search via parameter sharing. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm¨assan, Stockholm, Sweden, July 10-15, 2018, pages 4092–4101. Jack W. Rae, Chris Dyer, Peter Dayan, and Timothy P. Lillicrap. 2018. Fast parametric learning with activation memorization. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsm¨assan, Stockholm, Sweden, July 10-15, 2018, pages 4225–4234. Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. 2019. Regularized evolution for image classifier architecture search. volume 33, page 4780–4789. Association for the Advancement of Artificial Intelligence (AAAI). Yanyao Shen, Xu Tan, Di He, Tao Qin, and Tie-Yan Liu. 2018. Dense information flow for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 1294–1303. David R. So, Quoc V. Le, and Chen Liang. 2019. The evolved transformer. In Proceedings of the 36th International Conference on Machine Learning, ICML 6639 2019, 9-15 June 2019, Long Beach, California, USA, pages 5877–5886. Kenneth O. Stanley and Risto Miikkulainen. 2002. Evolving neural networks through augmenting topologies. Evol. Comput., 10(2):99–127. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000–6010. Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. 2019. Learning deep transformer models for machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1810–1822, Florence, Italy. Association for Computational Linguistics. Qiang Wang, Fuxue Li, Tong Xiao, Yanyang Li, Yinqiao Li, and Jingbo Zhu. 2018. Multi-layer representation fusion for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 3015–3026. Paul J Werbos. 1990. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Yingwei Xin, Ethan Hart, Vibhuti Mahajan, and JeanDavid Ruvini. 2018. Learning better internal structure of words for sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2584–2593, Brussels, Belgium. Association for Computational Linguistics. Jie Yang and Yue Zhang. 2018. 
NCRF++: An opensource neural sequence labeling toolkit. In Proceedings of ACL 2018, System Demonstrations, pages 74–79, Melbourne, Australia. Association for Computational Linguistics. Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, and Cheng-Lin Liu. 2018. Practical block-wise neural network architecture generation. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 1822, 2018, pages 2423–2432. Barret Zoph and Quoc V. Le. 2017. Neural architecture search with reinforcement learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. 2018. Learning transferable architectures for scalable image recognition. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 8697–8710.
2020
592
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6640–6651 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6640 The Right Tool for the Job: Matching Model and Instance Complexities Roy Schwartz♦♠ Gabriel Stanovsky♦♠ Swabha Swayamdipta♦ Jesse Dodge♣∗ Noah A. Smith♦♠ ♦Allen Institute for Artificial Intelligence ♠Paul G. Allen School of Computer Science & Engineering, University of Washington ♣School of Computer Science, Carnegie Mellon University {roys,gabis,swabhas,jessed,noah}@allenai.org Abstract As NLP models become larger, executing a trained model requires significant computational resources incurring monetary and environmental costs. To better respect a given inference budget, we propose a modification to contextual representation fine-tuning which, during inference, allows for an early (and fast) “exit” from neural network calculations for simple instances, and late (and accurate) exit for hard instances. To achieve this, we add classifiers to different layers of BERT and use their calibrated confidence scores to make early exit decisions. We test our proposed modification on five different datasets in two tasks: three text classification datasets and two natural language inference benchmarks. Our method presents a favorable speed/accuracy tradeoff in almost all cases, producing models which are up to five times faster than the state of the art, while preserving their accuracy. Our method also requires almost no additional training resources (in either time or parameters) compared to the baseline BERT model. Finally, our method alleviates the need for costly retraining of multiple models at different levels of efficiency; we allow users to control the inference speed/accuracy tradeoff using a single trained model, by setting a single variable at inference time. We publicly release our code.1 1 Introduction The large increase in the size of artificial intelligence models often increases production costs (Amodei and Hernandez, 2018; Schwartz et al., 2019), and can also limit adoption on real-time devices. Compared to training, which is a one-time large investment, inference costs are incurred for every instance in production, and can thus add up ∗Research completed during an internship at AI2. 1github.com/allenai/sledgehammer Layer 0 Layer i Layer k Layer n Input Layer l Layer j Is confident? Yes No Is confident? Yes No Is confident? Yes No Prediction Early exit prediction Early exit prediction Early exit prediction Figure 1: An illustration of our approach. Some layers of a BERT-large model are attached to output classifiers, which make their respective predictions. The confidence of each layer-wise prediction is computed. If high enough, the model takes an early exit, avoiding the computation associated with successive (higher) layers (grayed out). Otherwise, the model continues to the next layer/classifier. significantly. For instance, Microsoft reports that using BERT (Devlin et al., 2019) to process Bing queries requires more than 2,000 GPUs concurrently.2 We present a method to reduce the inference cost of today’s common models in NLP: fine-tuned contextual word representations. Our method exploits variation along two axes: models differ in size and cost, and instances vary in difficulty. 
Our method assesses the complexity of each test instance and matches it with the most efficient model in our “toolbelt.”3 As a result, some instances, which we refer to in this paper as “easy” or “simple,” can be solved by small models, leading to computational savings, while other instances (termed “hard” or “difficult”) have access to larger models, thus 2https://tinyurl.com/tzhj3o8 3Our approach should not be confused with model ensembles (Kuncheva and Whitaker, 2003), where the prediction of multiple models is combined, on every instance, in order to improve accuracy, at the expense of slower inference time. 6641 retaining good performance. We apply our method to the BERT-large model, modifying its fine-tuning procedure by adding multiple output layers to some of its original ℓ= 24 layers.4 A classifier at the kth layer, is more efficient, though (presumably) less accurate than a classifier at a later ℓth layer (where ℓ> k). At inference time, we run each instance on these classifiers in increasing order of depth. For each classification decision, we use its confidence as an inferencestopping criterion, continuing to the next, larger classifier only if the current classifier is not confident enough in its prediction. Since confidence scores play an important role, we use calibration techniques to make them more reliable. Associating classifiers with different layers of the same network allows us to reuse the computation performed by the simple classifiers for the complex ones. See Figure 1 for an illustration. We experiment with three text classification benchmarks and two natural language inference (NLI) benchmarks. We consider each of our classifiers with different BERT layers as individual baselines. We find that using our method leads to a consistently better speed/accuracy tradeoff in almost all cases. In particular, in some cases, we obtain similar performance while being as much as five times faster than our strongest baseline (the original BERT-large mode with a single classification layer after the last layer). Our approach, while allowing substantially faster inference compared to the standard BERTlarge model, is neither slower to fine-tune nor significantly larger in terms of parameters, requiring less than 0.005% additional parameters. Moreover, our method is quite flexible: unlike other approaches for inference speed-up such as model distillation or pruning, which require training a different model for each point along the speed/accuracy curve, our method only requires training a single model, and by setting a single variable at inference time—the confidence threshold—supports each point along that curve. Finally, our method is orthogonal to compression methods such as model distillation (Hinton et al., 2014). Our experiments with a distilled version of BERT (Jiao et al., 2019) show that our method further improves the speed/accuracy curve on top of that model. We 4For simplicity, we refer to these output layers as classifiers, though our method can also be applied to nonclassification tasks. publicly release our code.5 2 Premise: Models Vary in Size, Examples Vary in Complexity Our goal in this paper is to make model inference more efficient. Our premise relies on two general observations: first, as NLP models become bigger (e.g., in number of parameters), they become both better (in terms of downstream task accuracy), and slower to run. 
This trend is consistently observed, most notably in recent contextual representations work that compares different variants of the same model (Devlin et al., 2019; Radford et al., 2019; Raffel et al., 2019, inter alia). Second, inputs are not equally difficult. For example, instances differ in length and wealth of linguistic phenomena, which affects the amount of processing required to analyze them. Consider the examples below for the task of sentiment analysis: (1) The movie was awesome. (2) I can’t help but wonder whether the plot was written by a 12 year-old or by an awardwinning writer. Sentence 1 is short and simple to process. In contrast, Sentence 2 is long, contains misleading positive phrases (“award-winning writer”), and uses figurative speech (“the plot was written by a 12 year-old”). As a result, it is potentially harder to process.6 This work leverages these two observations by introducing a method to speed-up inference by matching simple instances with small models, and complex instances with large models. 3 Approach: The Right Tool for the Job Motivation We assume a series of n trained models m1, . . . , mn for a given task, such that for each 1 < i ≤n, mi is both more accurate than mi−1 (as measured by a performance on validation data) and more expensive to execute. Current practice in NLP, which favors accuracy rather than efficiency (Schwartz et al., 2019), would typically run mn on each test instance, as it would likely lead to the highest test score. However, many of the test instances could be solved by simpler (and faster) 5github.com/allenai/sledgehammer 6Note that simplicity is task-dependent. For example, in topic classification, models often accumulate signal across a document, and shorter inputs (with less signal) may be more difficult than longer ones. See Section 6. 6642 models; if we had an oracle that identifies the smallest model that solves a given instance, we could use it to substantially speed up inference. Our goal is to create an automatic measure which approximates the behavior of such an oracle, and identify the cheapest accurate model for each instance. BERT-large To demonstrate our approach, we consider the BERT-large model (Devlin et al., 2019), based on a transformer architecture (Vaswani et al., 2017) with 24 layers. To apply BERT-large to some downstream task, an output layer is typically added to the final layer of the model, and the model is fine-tuned on training data for that task. To make a prediction using the classifier on the final layer, the computation goes through all the layers sequentially, requiring more computation than a shallower model with fewer layers, which would suffice in some cases. Suite of models Our approach leverages BERT’s multilayered structure by adding an output layer to intermediate layers of the model. For k < ℓ, the output layer after k BERT layers exits the model earlier than a deeper output layer ℓ, and therefore yields a more efficient (but potentially less accurate) prediction. Confidence scores for early exit decisions To make early exit decisions, we calculate the layerwise BERT representations sequentially. As we reach a classification layer, we use it to make predictions. We interpret the label scores output by softmax as confidence scores. We use these confidence scores to decide whether to exit early or continue to the next (more expensive and more accurate) classifier. See Figure 1 for an illustration. 
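A minimal sketch of the inference procedure just described: classifiers attached to successively deeper layers are queried in order, and the first one whose softmax confidence clears a threshold supplies the prediction, reusing all BERT layers computed so far. This is an illustrative reconstruction, not the released code; the encoder interface (`bert.embed`, `bert.layers`), the use of the first token's representation as classifier input, and batch size 1 are assumptions.

```python
import torch

def early_exit_predict(bert, classifiers, exit_layers, tokens, threshold):
    """Run the encoder layer by layer; after each layer in `exit_layers`
    (e.g. {0, 4, 12, 23}, including the last layer), apply its classifier
    and exit early if the calibrated softmax confidence exceeds `threshold`."""
    hidden = bert.embed(tokens)                 # (1, seq_len, dim), assumed API
    logits = None
    for i, layer in enumerate(bert.layers):
        hidden = layer(hidden)                  # computation is reused by later exits
        if i in exit_layers:
            logits = classifiers[i](hidden[:, 0])          # classify the first token
            confidence, label = torch.softmax(logits, dim=-1).max(dim=-1)
            if confidence.item() >= threshold:
                return label.item()             # early exit: skip the higher layers
    return logits.argmax(dim=-1).item()         # deepest classifier as fallback
```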
Training details To train the model, we use the standard way of applying BERT to downstream tasks—fine-tuning the pre-trained weights, while learning the weights of the randomly initialized classifier, where here we learn multiple classifiers instead of one. As our loss function, we sum the losses of all classification layers, such that lower layers are trained to both be useful as feature generators for the higher layers, and as input to their respective classifiers. This also means that every output layer is trained to perform well on all instances. Importantly, we do not perform early exits during training, but only during inference. To encourage monotonicity in performance of the different classifiers, each classifier at layer k is given as input a weighted sum of all the layers up to and including k, such that the weight is learned during fine-tuning (Peters et al., 2018).7 Calibration Classifiers’ confidence scores are not always reliable (Jiang et al., 2018). One way to mitigate this concern is to use calibration, which encourages the confidence level to correspond to the probability that the model is correct (DeGroot and Fienberg, 1983). In this paper we use temperature calibration, which is a simple technique that has been shown to work well in practice (Guo et al., 2017), in particular for BERT fine-tuning (Desai and Durrett, 2020). The method learns a single parameter, denoted temperature or T, and divides each of the logits {zi} by T before applying the softmax function: pred = arg max i exp(zi/T) P j exp(zj/T) We select T to maximize the log-likelihood of the development dataset. Note that temperature calibration is monotonic and thus does not influence predictions. It is only used in our model to make early-exit decisions. Discussion Our approach has several attractive properties. First, if mi is not sufficiently confident in its prediction, we reuse the computation and continue towards mi+1 without recomputing the BERT layers up to mi. Second, while our model is larger in terms of parameters compared to the standard approach due to the additional classification layers, this difference is marginal compared to the total number of trainable parameters: our experiments used 4 linear output layers instead of 1, which results in an increase of 6K (binary classification) to 12K (4-way classification) parameters. For the BERT-large model with 335M trainable parameters, this is less than 0.005% of the parameters. Third, as our experiments show (Section 5), while presenting a much better inference time/accuracy tradeoff, fine-tuning our model is as fast as fine-tuning the standard model with a single output layer. Moreover, our model allows for controlling this tradeoff by setting the confidence threshold at inference time, allowing users to better utilize the model for their inference budget. 7We also considered feeding the output of previous classifiers as additional features to subsequent classifiers, known as stacking (Wolpert, 1992). Preliminary experiments did not yield any benefits, so we did not further pursue this direction. 6643 Name #labels Train Val. Test AG 4 115K 0.5K 7.6K IMDB 2 020K 0.5K .25K SST 2 007K 0.9K 1.8K SNLI 3 550K .10K .10K MNLI 3 393K 9.8K 9.8K Table 1: Number of labels and instances for the datasets in our experiments. The top set are text classification datasets, and the bottom set are NLI datasets. 4 Experiments To test our approach, we experiment with three text classification and two natural language inference (NLI) tasks in English. 
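The temperature calibration described above divides each logit by a single learned temperature T before the softmax, so the prediction is argmax_i exp(z_i/T) / Σ_j exp(z_j/T); because the scaling is monotonic, it changes only the confidence, never the predicted label. Before turning to the datasets, the following sketch shows how T can be fit on validation logits by maximizing log-likelihood, in the spirit of Guo et al. (2017); this is not the authors' code, and the optimizer and step count are assumptions.

```python
import torch

def fit_temperature(val_logits, val_labels, steps=200, lr=0.01):
    """Learn a single temperature T > 0 maximizing validation log-likelihood.
    val_logits: (N, num_labels) float tensor; val_labels: (N,) long tensor."""
    log_t = torch.zeros(1, requires_grad=True)       # parameterize T = exp(log_t) > 0
    optimizer = torch.optim.Adam([log_t], lr=lr)
    nll = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = nll(val_logits / log_t.exp(), val_labels)
        loss.backward()
        optimizer.step()
    return log_t.exp().item()

def calibrated_confidence(logits, temperature):
    """Confidence used for early-exit decisions; predictions are unchanged."""
    return torch.softmax(logits / temperature, dim=-1).max(dim=-1).values
```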
NLI is a pairwise sentence classification task, where the goal is to predict whether a hypothesis sentence entails, contradicts or is neutral to a premise sentence (Dagan et al., 2005). Below we describe our datasets, our baselines, and our experimental setup. Datasets For text classification, we experiment with the AG news topic identification dataset (Zhang et al., 2015) and two sentiment analysis datasets: IMDB (Maas et al., 2011) and the binary Stanford sentiment treebank (SST; Socher et al., 2013).8 For NLI, we experiment with the SNLI (Bowman et al., 2015) and MultiNLI (MNLI; Williams et al., 2018) datasets. We use the standard train-development-test splits for all datasets except for MNLI, for which there is no public test set. As MNLI contains two validation sets (matched and mismatched), we use the matched validation set as our validation set and the mismatched validation set as our test set. See Table 1 for dataset statistics. Baselines We use two types of baselines: running BERT-large in the standard way, with a single output layer on top of the last layer, and three efficient baselines of increasing size (Figure 2). Each is a fine-tuned BERT model with a single output layer after some intermediate layer. Importantly, these baselines offer a speed/accuracy tradeoff, but not within a single model like our approach. As all baselines have a single output layer, they all have a single loss term, such that BERT layers 1, . . . , k only focus on a single classification layer, rather than multiple ones as in our approach. As with our model, the single output layer in each of 8For SST, we only used full sentences, not phrases. our baselines is given as input a learned weighted sum of all BERT layers up to the current layer. As an upper bound to our approach, we consider a variant of our model that uses the exact amount of computation required to solve a given instance. It does so by replacing the confidence-based earlyexit decision function with an oracle that returns the fastest classifier that is able to solve that instance, or the fastest classifier for instances that are not correctly solved by any of the classifiers. Experimental setup We experiment with BERTlarge-uncased (24 layers). We add output layers to four layers: 0, 4, 12 and 23.9 We use the first three layer indices for our efficient baselines (the last one corresponds to our standard baseline). See Appendix A for implementation details. For training, we use the largest batch size that fits in our GPU memory for each dataset, for both our baselines and our model. Our approach relies on discrete early-exit decisions that might differ between instances in a batch. For the sake of simplicity, we use a batch size of 1 during inference. This is useful for production setups where instances arrive one by one. Larger batch sizes can be applied using methods such as budgeted batch classification (Huang et al., 2018), which specify a budget for the batch and select a subset of the instances to fit that budget, while performing early exit for the rest of the instances. We defer the technical implementation of this idea to future work. To measure efficiency, we compute the average runtime of a single instance, across the test set. We repeat each validation and test experiment five times and report the mean and standard deviation. At prediction time, our method takes as an input a threshold between 0 and 1, which is applied to each confidence score to decide whether to exit early. 
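Before turning to the effect of this threshold, the oracle upper bound introduced above can be written down explicitly: for each instance it charges only the cheapest classifier that predicts the gold label, falling back to the cheapest classifier when none is correct. The sketch below is illustrative only; the data structures are assumptions.

```python
def oracle_exit_layer(per_layer_predictions, gold_label, exit_layers):
    """Return the earliest (cheapest) exit layer whose classifier already
    predicts the gold label; if no classifier is correct, return the
    cheapest layer, as in the oracle baseline described above.
    per_layer_predictions: dict mapping exit layer index -> predicted label;
    exit_layers: exit layer indices ordered by increasing depth/cost."""
    for layer in exit_layers:                      # e.g. [0, 4, 12, 23]
        if per_layer_predictions[layer] == gold_label:
            return layer
    return exit_layers[0]
```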
Lower thresholds result in earlier exits, with 0 implying the most efficient classifier is always used. A threshold of 1 always uses the most expensive and accurate classifier. 5 Results A better speed/accuracy tradeoff. Figure 3 presents our test results.10 The blue line shows our model, where each point corresponds to an increasingly large confidence threshold. The leftmost 9Preliminary experiments with other configurations, including ones with more layers, led to similar results. 10For increased reproduciblity (Dodge et al., 2019a), we also report validation results in Appendix B. 6644 Prediction Layer 0 Layer i Layer k Layer n Input Layer l Layer j (a) Efficient Baseline Prediction Layer 0 Layer i Layer k Layer n Input Layer l Layer j (b) Standard Baseline Layer 0 Layer i Layer k Layer n Input Layer l Layer j Is confident? Yes No Is confident? Yes No Is confident? Yes No Prediction Early exit prediction Early exit prediction Early exit prediction (c) Our approach Figure 2: Illustration of our baselines. (2a) Efficient baseline: adding a single output layer to an intermediate layer, while not processing the remaining BERT layers. (2b) The standard model: adding a single output layer to the final BERT layer. (2c) Our approach: adding multiple output layers to intermediate BERT layers; running the corresponding classifiers sequentially, while taking early exits based on their confidence scores. (rightmost) point is threshold 0 (1), with x-value showing the fraction of processing time relative to the standard baseline. Our first observation is that our efficient baselines constitute a fast alternative to the standard BERT-large model. On AG, a classifier trained on layer 12 of BERT-large is 40% faster and within 0.5% of the standard model. On SNLI and IMDB a similar speedup results in 2% loss in performance. Most notably, our approach presents a similar or better tradeoff in almost all cases. Our model is within 0.5% of the standard model while being 40% (IMDB) and 80% (AG) faster. For SST, our curve is strictly above two of the efficient baselines, while being below the standard one. In the two NLI datasets, our curve is slightly above the curve for the medium budgets, and below it for lower ones. Finally, the results of the oracle baseline indicate the further potential of our approach: in all cases, the oracle outperforms the original baseline by 1.8% (AG) to 6.9% (MNLI), while being 4–6 times faster. These results motivate further exploration of better early-exit criteria (see Section 6). They also highlight the diversity of the different classifiers. One might expect that the set of correct predictions by the smaller classifiers will be contained in the corresponding sets of the larger classifiers. The large differences between the original baseline and our oracle indicate that this is not the case, and motivate future research on efficient ensemble methods which reuse much of the computation across different models. Extreme case analysis Our results hint that combining the loss terms of each of our classifiers hurts their performance compared to our baselines, which use a single loss term. For the leftmost point in our graphs—always selecting the most efficient classifier—we observe a substantial drop in performance compared to the corresponding most efficient baseline, especially for the NLI datasets. 
For our rightmost point (always selecting the most accurate classifier), we observe a smaller drop, mostly in SST and MNLI, compared to the corresponding baseline, but also slower runtime, probably due to the overhead of running the earlier classifiers. These trends further highlight the potential of our method, which is able to outperform the baseline speed-accuracy curves despite the weaker starting point. It also suggests ways to further improve our method by studying more sophisticated methods to combine the loss functions of our classifiers, and encourage them to be as precise as our baselines. We defer this to future work. Similar training time Fine-tuning BERT-large with our approach has a similar cost to fine-tuning the standard BERT-large model, with a single output layer. Table 2 shows the fine-tuning time of our model and the standard BERT-large baseline. Our model is not slower to fine-tune in four out of five cases, and is even slightly faster in three of them.11 This property makes our approach appealing compared to other approaches for reducing runtime such as pruning or model distillation (Section 7). These require, in addition to training the full model, also training another model for each point along the speed/accuracy curve, therefore substantially increasing the overall training time required to gen11We note that computing the calibration temperature requires additional time, which ranges between 3 minutes (SST) to 24 minutes (MNLI). 6645 Figure 3: Test accuracy and processing time of our approach (blue squares, each point representing a different confidence threshold), our standard baseline (std., green diamond), efficient baselines (eff., red dots), and oracle baseline (orange star). Left and higher is better. Our method presents similar or better speed/accuracy tradeoff in almost all cases. Dataset Training Time Ours Standard AG 052 053 IMDB 056 057 SST 004 004 SNLI 289 300 MNLI 852 835 Table 2: Fine-tuning times (in minutes) of our model compared to the most accurate baseline: the standard BERT-large model with a single output layer. erate a full speed/accuracy tradeoff. In contrast, our single model allows for full control over this tradeoff by adjusting the confidence threshold, without increasing the training time compared to the standard, most accurate model. Combination with model distillation A key property of our approach is that it can be applied to any multi-layer model. Particularly, it can be combined with other methods for making models more efficient, such as model distillation. To demonstrate this, we repeat our IMDB experiments with tinyBERT (Jiao et al., 2019), which is a distilled version of BERT-base.12 We experiment with the tinyBERT v2 6-layer-768dim version.13 Figure 4 shows our IMDB results. Much like for BERT-large, our method works well for tinyBERT, providing a better speed/accuracy tradeoff compared to the standard tinyBERT baseline and the efficient tinyBERT baselines. Second, while tinyBERT is a distilled version of BERT-base, its speed-accuracy tradeoff is remarkably similar to our BERT-large efficient baselines, which hints that our efficient baselines are a simpler alternative to tinyBERT, and as effective for model compression. Finally, our method applied to BERT-large provides the best overall speedaccuracy tradeoff, especially with higher budgets. 
6 A Criterion for “Difficulty” Our approach is motivated by the inherent variance in the level of complexity of text instances, and leverages this variance to obtain a better 12While we experimented with BERT-large and not BERTbase, the point of this experiment is to illustrate the potential of our method to be combined with distillation, and not to directly compare to our main results. 13Jiao et al. (2019) also suggested a task-specific version of tinyBERT which distills the model based on the downstream task. For consistency with our BERT-large experiments, we use the general version. 6646 Figure 4: Experiments with tinyBERT. Our method (light-blue pentagons) provides a better speed-accuracy tradeoff compared to the standard (light-green diamonds) and efficient (small light-red dots) baselines. For comparison, we also show the results of our method (blue squares) and our efficient baselines (large red dots) with BERT-large. Our method applied to BERTlarge provides the overall best tradeoff. Dataset Length Consistency AG –0.13 0.37 IMDB –0.17 0.47 SST –0.19 0.36 SNLI –0.08 0.44 MNLI –0.13 0.39 Table 3: Spearman’s ρ correlation between confidence levels for our most efficient classifier and two measures of difficulty: document length and consistency. Confidence is correlated reasonably with consistency across all datasets. For all datasets except AG, confidence is (loosely) negatively correlated with document length. For the AG topic classification dataset, confidence is (loosely) positively correlated. Results for the other layers show a similar trend. speed/accuracy tradeoff compared to our baselines. Our method also automatically identifies instances on which smaller models are highly confident in their predictions. Here we analyze our data using other definitions of difficulty. Perhaps surprisingly, we find that the various definitions are not strongly correlated with ours. The results we observe below, combined with the performance of our oracle baseline (Section 5), motivate further study on more advanced methods for early exiting, which could potentially yield even larger computational gains. Shorter is easier? We first consider the length of instances: is our model more confident in its decisions on short documents compared to longer ones? To address this we compute Spearman’s ρ correlation between the confidence level of our most efficient classifier and the document’s length. The results in Table 3 show that the correlations across all datasets are generally low (|ρ| < 0.2). Moreover, as expected, across four out of five datasets, the (weak) correlation between confidence and length is negative; our model is somewhat more confident in its prediction on shorter documents. The fifth dataset (AG), shows the opposite trend: confidence is positively correlated with length. This discrepancy might be explained by the nature of the tasks we consider. For instance, IMDB and SST are sentiment analysis datasets, where longer texts might include conflicting evidence and thus be harder to classify. In contrast, AG is a news topic detection dataset, where a conflict between topics is uncommon, and longer documents provide more opportunities to find the topic. Consistency and difficulty Our next criterion for “difficulty” is the consistency of model predictions. Toneva et al. (2019) proposed a notion of “unforgettable” training instances, which once the model has predicted correctly, it never predicts incorrectly for the remainder of training iterations. 
Such instances can be thought of as “easy” or memorable examples. Similarly, Sakaguchi et al. (2019) defined test instances as “predictable” if multiple simple models predict them correctly. Inspired by these works, we define the criterion of consistency: whether all classifiers in our model agree on the prediction of a given instance, regardless of whether it is correct or not. Table 3 shows Spearman’s ρ correlation between the confidence of the most efficient classifier and this measure of consistency. Our analysis reveals a medium correlation between confidence and consistency across all datasets (0.37 ≤ρ ≤0.47), which indicates that the measure of confidence generally agrees with the measure of consistency. Comparison with hypothesis-only criteria Gururangan et al. (2018) and Poliak et al. (2018) showed that some NLI instances can be solved by only looking at the hypothesis—these were artifacts of the annotation process. They argued that such instances are “easier” for machines, compared to those which required access to the full input, which they considered “harder.” Table 4 shows the correlation between the confidence of each of our classifiers on the SNLI and MNLI dataset with the confidence of a hypothesis-only classifier. Simi6647 Layer SNLI MNLI Hyp.-Only IAC Hyp.-Only IAC 0 0.39 0.14 0.37 0.08 4 0.31 0.25 0.35 0.21 12 0.31 0.31 0.32 0.27 23 0.28 0.32 0.30 0.32 Table 4: Spearman’s ρ correlation between confidence levels for our classifiers (of different layers) on the validation sets of SNLI and MNLI, and two measures of difficulty: hypothesis-only classifier predictions (Hyp.Only) and inter-annotator consensus (IAC). larly to the consistency results, we see that the confidence of our most efficient classifier is reasonably correlated with the predictions of the hypothesisonly classifier. As expected, as we move to larger, more accurate classifiers, which presumably are able to make successful predictions on harder instances, this correlation decreases. Inter-annotator consensus Both NLI datasets include labels from five different annotators. We treat the inter-annotator consensus (IAC) as another measure of difficulty: the higher the consensus is, the easier the instance. We compute IAC for each example as the fraction of annotators who agreed on the majority label, hence this number ranges from 0.6 to 1.0 for five annotators. Table 4 shows the correlation between the confidence of our classifiers with the IAC measure on SNLI and MNLI. The correlation with our most efficient classifiers is rather weak, only 0.08 and 0.14. Surprisingly, as we move to larger models, the correlation increases, up to 0.32 for the most accurate classifiers. This indicates that the two measures perhaps capture a different notion of difficulty. Confidence across labels Figure 5 shows the proportion of instances in our validation set that are predicted with high confidence by our calibrated model (90% threshold) for each dataset, label, and model size. We first note that across all datasets, and almost all model sizes, different labels are not predicted with the same level of confidence. For instance, for AG, the layer 0 model predicts the tech label with 87.8% average confidence, compared to 96.8% for the sports label. Moreover, in accordance with the overall performance, across almost all datasets and model sizes, the confidence levels increase as the models get bigger in size. 
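Returning to the correlation analyses reported in Tables 3 and 4: each one computes a per-instance difficulty proxy (document length, prediction consistency, hypothesis-only confidence, or inter-annotator consensus) and correlates it with a classifier's confidence using Spearman's ρ. A hedged sketch of two of these, assuming per-instance confidences and per-layer predictions have already been collected:

```python
from scipy.stats import spearmanr

def consistency(per_layer_predictions):
    """1 if all classifiers agree on an instance's predicted label, else 0
    (the consistency criterion defined above)."""
    return [int(len(set(preds)) == 1) for preds in per_layer_predictions]

def difficulty_correlations(layer0_confidence, doc_lengths, per_layer_predictions):
    """Spearman's rho between the most efficient classifier's confidence and
    two difficulty proxies: document length and consistency (cf. Table 3)."""
    rho_length, _ = spearmanr(layer0_confidence, doc_lengths)
    rho_consistency, _ = spearmanr(layer0_confidence,
                                   consistency(per_layer_predictions))
    return {"length": rho_length, "consistency": rho_consistency}
```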
Finally, in some cases, as we move towards larger models, the gaps in confidence close (e.g., IMDB and SST), although the relative ordering hardly ever changes. Two potential explanations come up when observing these results; either some labels are easier to predict than others (and thus the models are more confident when predicting them), or the models are biased towards some classes compared to others. To help differentiate between these two hypotheses, we plot in Figure 6 the average confidence level and the average F1 score of the most efficient classifier across labels and datasets. The plot indicates that both hypotheses are correct to some degree. Some labels, such as sports for AG and positive for IMDB, are both predicted with high confidence, and solved with high accuracy. In contrast, our model is overconfident in its prediction of some labels (business for AG, positive for SST), and underconfident in others (tech for AG, entailment for MNLI). These findings might indicate that while our method is designed to be globally calibrated, it is not necessarily calibrated for each label individually. Such observations relate to existing concerns regarding fairness when using calibrated classifiers (Pleiss et al., 2017). 7 Related Work Methods for making inference more efficient have received considerable attention in NLP over the years (Eisner and Satta, 1999; Goldberg and Elhadad, 2010, inter alia). As the field has converged on deep neural architecture solutions, most efforts focus on making models smaller (in terms of model parameters) in order to save space as well as potentially speed up inference. In model distillation (Hinton et al., 2014) a smaller model (the student) is trained to mimic the behavior or structure of the original, larger model (the teacher). The result is typically a student that is as accurate as the teacher, but smaller and faster (Kim and Rush, 2016; Jiao et al., 2019; Tang et al., 2019; Sanh et al., 2019). Pruning (LeCun et al., 1990) removes some of the weights in the network, resulting in a smaller, potentially faster network. The basic pruning approach removes individual weights from the network (Swayamdipta et al., 2018; Gale et al., 2019). More sophisticated approaches induce structured sparsity, which removes full blocks (Michel et al., 2019; Voita et al., 2019; Dodge et al., 2019b). Liu et al. (2018) and Fan et al. (2020) pruned deep models by applying dropout to different layers, which allows dynamic control of 6648 Figure 5: Instances with different labels are predicted with different degrees of confidence. Figure 6: Comparing confidence levels and F1 scores of our most efficient classifier across datasets and labels. High confidence by the model is sometimes explained by “easy” classes that are predicted with high F1 (e.g., sports in AG). Other cases might stem from biases of the model which make it overconfident despite the label being harder than other labels (e.g., positive in SST). the speed/accuracy tradeoff of the model without retraining. Our method also allows for controlling this tradeoff with a single training pass, and yields computational savings in an orthogonal manner: by making early exit decisions. Quantization is another popular method to decrease model size, which reduces the numerical precision of the model’s weights, and therefore both speeds up numerical operations and reduces model size (Wr´obel et al., 2018; Shen et al., 2019; Zafrir et al., 2019). 
Some works introduced methods to allocate fewer resources to certain parts of the input (e.g., certain words), thereby potentially reducing training and/or inference time (Graves, 2016; Seo et al., 2018). Our method also puts less resources into some of the input, but does so at the document level rather than for individual tokens. A few concurrent works have explored similar ideas for dynamic early exits in the transformer model. Elbayad et al. (2020) and Dabre et al. (2020) introduced early stopping for sequence-tosequence tasks (e.g., machine translation). Bapna et al. (2020) modify the transformer architecture with “control symbols” which determine whether components are short-circuited to optimize budget. Finally, Liu et al. (2020) investigated several inference-time cost optimizations (including early stopping) in a multilingual setting. Several computer vision works explored similar ideas to the one in this paper. Wang et al. (2018) introduced a method for dynamically skipping convolutional layers. Bolukbasi et al. (2017) and Huang et al. (2018) learned early exit policies for computer vision architectures, observing substantial computational gains. 8 Conclusion We presented a method that improves the speed/accuracy tradeoff for inference using pretrained language models. Our method makes early exits for simple instances that require less processing, and thereby avoids running many of the layers of the model. Experiments with BERT-large on five text classification and NLI datasets yield substantially faster inference compared to the standard approach, up to 80% faster while maintaining similar performance. Our approach requires neither additional training time nor significant number of additional parameters compared to the standard approach. It also allows for controlling the speed/accuracy tradeoff using a single model, without retraining it for any point along the curve. Acknowledgments The authors thank the members of Noah’s ARK at the University of Washington, the researchers at the Allen Institute for AI, and the anonymous reviewers for their valuable feedback. 6649 References Dario Amodei and Danny Hernandez. 2018. AI and compute. Blog post. Ankur Bapna, Naveen Arivazhagan, and Orhan Firat. 2020. Controlling computation versus quality for neural sequence models. arXiv:2002.07106. Tolga Bolukbasi, Joseph Wang, Ofer Dekel, and Venkatesh Saligrama. 2017. Adaptive neural networks for efficient inference. In Proc. of ICML. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proc. of EMNLP. Raj Dabre, Raphael Rubino, and Atsushi Fujita. 2020. Balancing cost and benefit with tied-multi transformers. arXiv:2002.08614. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Proc. of MLCW. Morris H. DeGroot and Stephen E. Fienberg. 1983. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society: Series D (The Statistician), 32(1-2):12–22. Shrey Desai and Greg Durrett. 2020. Calibration of pre-trained transformers. arXiv:2003.07892. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL. Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A. Smith. 2019a. Show your work: Improved reporting of experimental results. In Proc. of EMNLP. 
Jesse Dodge, Roy Schwartz, Hao Peng, and Noah A. Smith. 2019b. RNN architecture learning with sparse regularization. In Proc. of EMNLP. Jason Eisner and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proc. of ACL. Maha Elbayad, , Jiatao Gu, Edouard Grave, and Michael Auli. 2020. Depth-adaptive transformer. In Proc. of ICLR. Angela Fan, Edouard Grave, and Armand Joulin. 2020. Reducing transformer depth on demand with structured dropout. In Proc. of ICLR. Trevor Gale, Erich Elsen, and Sara Hooker. 2019. The state of sparsity in deep neural networks. arXiv:1902.09574. Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. In Proc. of NLP-OSS. Yoav Goldberg and Michael Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Proc. of NAACL. Alex Graves. 2016. Adaptive computation time for recurrent neural networks. arXiv:1603.08983. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. In Proc. of ICML. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proc. of NAACL. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2014. Distilling the knowledge in a neural network. In Proc. of NeurIPS Deep Learning Workshop. Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Q. Weinberger. 2018. Multi-scale dense networks for resource efficient image classification. In Proc. of ICLR. Heinrich Jiang, Been Kim, Melody Y. Guan, and Maya Gupta. 2018. To trust or not to trust a classifier. In Proc. of NeurIPS. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. TinyBERT: Distilling BERT for natural language understanding. arXiv:1909.10351. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proc. of EMNLP. Ludmila I. Kuncheva and Christopher J. Whitaker. 2003. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine learning, 51(2):181–207. Yann LeCun, John S. Denker, and Sara A. Solla. 1990. Optimal brain damage. In Proc. of NeurIPS. Liyuan Liu, Xiang Ren, Jingbo Shang, Xiaotao Gu, Jian Peng, and Jiawei Han. 2018. Efficient contextualized representation: Language model pruning for sequence labeling. In Proc. of EMNLP. Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Haotang Deng, and Qi Ju. 2020. FastBERT: a selfdistilling BERT with adaptive inference time. In Proc. of ACL. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proc. of ACL. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Proc. of NeurIPS. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. 6650 Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q. Weinberger. 2017. On fairness and calibration. In Proc. of NeurIPS. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis Only Baselines in Natural Language Inference. In Proc. of ∗SEM. 
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910.10683. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. WinoGrande: An adversarial winograd schema challenge at scale. arXiv:1907.10641. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In Proc. of EMC2. Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2019. Green AI. arXiv:1907.10597. Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. 2018. Neural speed reading via skimRNN. In Proc. of ICLR. Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. 2019. Q-BERT: Hessian based ultra low precision quantization of BERT. arXiv:1909.05840. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proc. of EMNLP. Swabha Swayamdipta, Ankur P. Parikh, and Tom Kwiatkowski. 2018. Multi-mention learning for reading comprehension with neural cascades. In Proc. of ICLR. Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling taskspecific knowledge from BERT into simple neural networks. arXiv:1903.12136. Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J. Gordon. 2019. An empirical study of example forgetting during deep neural network learning. In Proc. of ICLR. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NeurIPS. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proc. of ACL. Xin Wang, Fisher Yu, Zi-Yi Dou, Trevor Darrell, and Joseph E. Gonzalez. 2018. SkipNet: Learning dynamic routing in convolutional networks. In Proc. of ECCV. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proc. of NAACL. David H. Wolpert. 1992. Stacked generalization. Neural Networks, 5:241–259. Krzysztof Wr´obel, Marcin Pietro´n, Maciej Wielgosz, Michał Karwatowski, and Kazimierz Wiatr. 2018. Convolutional neural network compression for natural language processing. arXiv:1805.10796. Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. Q8BERT: Quantized 8bit BERT. In Proc. of EMC2. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proc. of NeurIPS. 6651 A Implementation Details We fine-tune both our model and our baselines with dropout 0.1. We run all our experiments on a single Quadro RTX 8000 GPU. Our model is implement using the AllenNLP library (Gardner et al., 2018).14 Our calibration code relies on the implementation of Guo et al. (2017).15 We fine-tune text classification models for 2 epochs and NLI models for 4 epochs. 
We run ten trials of random search on the validation set for both our model and our baselines to select both a learning rate among {0.00002, 0.00003, 0.00005} and a random seed. For our baselines, we select the highest performing model on the validation set among the ten runs. For our model, we select the one with the highest performance averaged across all thresholds explored (we use 0% and 5% intervals in the range [55%, 100%]) on the validation set. B Validation Results Figure 7 shows the validation results of our experiments. 14https://allennlp.org 15https://github.com/gpleiss/ temperature_scaling Figure 7: Validation accuracy and processing time of our approach (blue line) and our standard baseline (std., green diamond), our efficient baselines (eff., red dots) and our oracle (orange star). Left and higher is better.
2020
593
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6652–6661 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6652 Bootstrapping Techniques for Polysynthetic Morphological Analysis William Lane Charles Darwin University [email protected] Steven Bird Charles Darwin University [email protected] Abstract Polysynthetic languages have exceptionally large and sparse vocabularies, thanks to the number of morpheme slots and combinations in a word. This complexity, together with a general scarcity of written data, poses a challenge to the development of natural language technologies. To address this challenge, we offer linguistically-informed approaches for bootstrapping a neural morphological analyzer, and demonstrate its application to Kunwinjku, a polysynthetic Australian language. We generate data from a finite state transducer to train an encoderdecoder model. We improve the model by “hallucinating” missing linguistic structure into the training data, and by resampling from a Zipf distribution to simulate a more natural distribution of morphemes. The best model accounts for all instances of reduplication in the test set and achieves an accuracy of 94.7% overall, a 10 percentage point improvement over the FST baseline. This process demonstrates the feasibility of bootstrapping a neural morph analyzer from minimal resources. 1 Introduction Polysynthesis represents the high point of morphological complexity. For example, in Kunwinjku, a language of northern Australia (ISO gup), the word ngarriwokyibidbidbuni contains six morphs: (1) ngarri1pl.exclwokwordyiCOMbidREDUPbidbugo.upni PI ‘We were talking as we climbed up’ Example (1) illustrates common features of polysynthesis: fusion, incorporation, and reduplication. Fusion combines multiple grammatical functions into a single morph, leading to large morph classes, and challenging the item-and-arrangement leanings of finite state morphology. Incorporation presents a modelling challenge because rule-based methods are unable to enumerate an open class, and machine learning methods need to learn how to recognize the boundary between contiguous large or open morph classes. Reduplication is also a challenge because it copies and prepends a portion of the verb root to itself, requiring a nonlinear or multi-step process. Tackling these phenomena using finite state transducers (FSTs) involves a combination of technical devices whose details depend on subtleties of the morphological analysis (cf. Arppe et al., 2017). There remains a need for more investigation of polysynthetic languages to deepen our understanding of the interplay between the options on the computational side, and the most parsimonious treatment on the linguistic side. Morphological complexity leads to data sparsity, as the combinatorial possibilities multiply with each morpheme slot: most morphologically complex words will be rare. Furthermore, many morphologically complex languages are also endangered, making it difficult to collect large corpora. Thus, polysynthetic languages challenge existing ways of building tools and applications for the communities that speak these languages. In this work we investigate Kunwinjku, spoken by about 2,000 people in West Arnhem in the far north of Australia. Members of the community have expressed interest in using technology to support language learning and literacy development. 
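Because the interlinear gloss in example (1) does not survive plain-text extraction well, the same segmentation is restated below as a small Python structure. The morphs and glosses are taken directly from the example; the inline comments are our own paraphrases.

```python
# Example (1): ngarriwokyibidbidbuni, "We were talking as we climbed up".
example_1 = [
    ("ngarri", "1pl.excl"),  # first person plural exclusive
    ("wok",    "word"),      # 'word' (illustrates incorporation)
    ("yi",     "COM"),       # comitative
    ("bid",    "REDUP"),     # reduplicated portion of the root
    ("bidbu",  "go.up"),     # verb root
    ("ni",     "PI"),        # tense/aspect suffix as glossed (PI)
]
surface = "".join(morph for morph, _ in example_1)
assert surface == "ngarriwokyibidbidbuni"
```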
Thus, we face the challenge of developing useful language technologies on top of robust models, with few resources and in a short space of time. We envisage morphologically-aware technologies including dictionary interfaces, spell checkers, text autocompletion, and tools for language learning (cf. Littell et al., 2018). This paper is organized as follows. We begin by reviewing previous work in finite state morphology, 6653 low resource morph analysis, neural approaches to morph analysis, and data augmentation for morphological reinflection (Sec. 2). Next, we describe our existing finite state model for Kunwinjku verbs (Sec. 3). In Section 4 we present a neural approach which addresses gaps in the previous model, including the ability to analyze reduplication and to exploit distributional information. Next we discuss our evaluation metrics and our handling of syncretism and ambiguity (Sec. 5). Finally, the results are presented in Section 6, including a discussion of how well the neural models address the shortcomings of the FST model. Our contributions include: (a) a robust morphological analyzer for verbs in a polysynthetic language; (b) a method for augmenting the training data with complex, missing structure; and (c) a technique for scoring the likelihood of generated training examples. 2 Background and Related Work Finite state transducers (FSTs) are a popular choice for modelling the morphology of polysynthetic languages. Several toolkits exist, including XFST, Foma, and HFST (Beesley and Karttunen, 2003; Hulden, 2009; Lind´en et al., 2013). Each one is an optimized implementation of the finite state calculus (Kaplan and Kay, 1994), providing additional support for morphosyntactic and morphophonological processes. Most recent work on computational modelling of morphologically rich languages is built on the foundation of these tools (Arppe et al., 2017; Littell, 2018; Andriyanets and Tyers, 2018; Chen and Schwartz, 2018; Cardenas and Zeman, 2018). As a case in point, we applied Foma in the analysis of the morphology of Kunwinjku verbs, but ran into difficulties accounting for outof-vocabulary (OOV) items in open morph classes. We also stopped short of addressing complex features like reduplication and verbal compounding, for technical reasons related to the expressiveness of FSTs (cf. Lane and Bird, 2019). Recently, neural models have gained popularity for morphological processing because they address some of the weakness of FSTs: subword modeling shows an ability to remain robust in the face of out-of-vocabulary items, and recurrent neural architectures with attention have shown a capacity to learn representations of context which allow the model to incorporate the notion of long-distance dependencies (Bahdanau et al., 2014). Neural morphological analyzers can be developed from training data generated by an FST. These analyzers are more robust, handling variation, out-of-vocabulary morphs, and unseen tag combinations (Micher, 2017; Moeller et al., 2018; Schwartz et al., 2019). They provide 100% coverage, always providing a “best guess” analysis for any surface form. Of course, FSTs can be modified to accommodate exceptions and OOV morphs, but this requires explicit modelling and usually does not achieve the robustness of neural analyzers (Schwartz et al., 2019). Anastasopoulos and Neubig (2019) found that they could augment their training set by hallucinating new stems, increasing accuracy on their test set by 10 percent. 
This method involved substituting random characters from the target language’s alphabet into the region identified by alignment as the probable root. For the sake of cross-lingual generalizability, their method does not consider language-specific structure. The task of morphological analysis, mapping an inflected form to its root and grammatical specifications, is similar to the task of machine transliteration, mapping a sequence of words or characters from source to target language without reordering. For example in Kunwinjku, consider the segmentation and gloss of the verb karridjalbebbehni: (2) karri12adjaljustbebbehDISTRni sit.NP ‘Let’s just sit down separately’ [E.497] Since the process of segmenting and glossing the verb does not contain any reorderings, the mapping of surface to glossed forms can be viewed as transliteration. 3 A Finite State Model of Kunwinjku Finite state transducers have long been viewed as an ideal framework to model morphology (Beesley and Karttunen, 2003). They are still a popular choice for low-resource polysynthetic languages (cf. Chen and Schwartz, 2018; Lachler et al., 2018). Here we summarize some features of Kunwinjku and describe the finite state implementation. 3.1 Features of Kunwinjku Kunwinjku is a polysynthetic agglutinating language, with verbs having up to 15 affix slots (Fig. 1). Morphs combine in a way that is “almost lego-like” (Evans, 2003; Baker and Harvey, 2003). 6654 −12 −11 −10 (−9) (−8) (−7) (−6) (−5) (−4) (−3) (−2) (−1) 0 +1 +2 Tense Subject Object Directional Aspect Misc1 Benefactive Misc2 GIN BPIN NumeroSpatial Comitative Verb root RR TAM Figure 1: Verbal affix positions in Kunwinjku. Regions where indices share a cell ([−12,−10], [+1,+2]) indicate potentially fused segments. Slot indices in parentheses indicate optionality. Adapted from (Evans, 2003, Fig 8.1). We implement morphotactics and morphophonology as separate stages, following usual practice (Fig. 2). However, this is not conducive to modelling noun incorporation, valence-altering morphology, fusion, or reduplication, all typical phenomena in polysynthetic languages. Kunwinjku has two kinds of noun incorporation. General incorporable nouns (GIN) are a closed class, manifesting a variety of grammatical roles (3). Body part incorporable nouns (BPIN) are an open class, restricting the scope of the action (4). (3) nga1mkaknightkeleminj fear.P ‘I was afraid at night’ (4) nga1mbidhandkeleminj fear.P ‘I was afraid for my hand’ [E.458] The open class BPIN occupy slot −3 and will be adjacent to the verb root whenever slots −2 and −1 are empty, as is common. With adjacent open class slots, Kunwinjku opens up the possibility of there being contiguous OOV morphs. In Kunwinjku there is no template to help distinguish members of these adjacent classes, thus creating a novel challenge for predicting morph boundaries. While transitivity of the verb is lexically defined, there are three morph classes which signal valency change: the benefactive (BEN), comitative (COM), and reflexive (RR). More details about the respective function of these morphs is given in Lane and Bird (2019), but here it suffices to say their presence in a verb makes resolving valency impossible without wider sentential context. This impacts the FST modelling, as we are unable to restrict possible illegal analyses on this basis, which results in overgeneration. Morphological fusion can lead to a proliferation of morphs and analyses. 
In Kunwinjku, there are no fewer than 157 possibilities for the first slot of the verb, fusing person and number (for both subject and object) along with tense. We find that this fusion affects decisions around tokenization of the data in preparation for training the seq2seq model (Sec. 4.2). morphotactic transducer morphophonological transducers karribimbom karriˆbimˆbuˆ~om [V][1pl.incl.3sg.PST][GIN.bim]bu[PP] Analyzed form: Intermediate form: Surface form: Figure 2: The high-level structure of the Kunwinjku finite state transducer. Analyzed forms are mapped to surface forms (and vice versa) through the composition of morphotactic and morphophonological transducers. Most of the world’s languages employ reduplication productively for diverse purposes (Rubino, 2005). It is a common feature of polysynthetic languages in particular. While modelling reduplication using FSTs is possible, the general consensus is that modelling partially reduplicative processes explode the state space of the model, and are burdensome to develop (Culy, 1985; Roark et al., 2007; Dras et al., 2012). For these reasons, the Kunwinjku FST model does not include an implementation of the language’s complex reduplication system. In Kunwinjku, there are three types of verbal reduplication: iterative, inceptive, and extended. Each type of reduplication has 1–3 (CV) templates which can be applied to the verb root to express the semantics associated with each type. In Section 4.4 we discuss an approach to ensure that the neural model handles Kunwinjku’s complex reduplication system. 3.2 Evaluating the FST We establish a baseline by scoring the FST on a set of n = 304 inflected verbs. The data was collected from the Kunwinjku Bible (which targets a modern vernacular), a language primer (Etherington and Etherington, 1998), and a website (Bininj Kunwok Language Project, 2019). The data was glossed in consultation with language experts. We define coverage as number of analysed forms, and accuracy as the number of correctly analyzed forms, both as a fraction of n. We define precision 6655 Accuracy Coverage Precision FST 84.4 88.5 95.4 Figure 3: All-or-nothing accuracy and coverage of the Kunwinjku FST Analyzer on the test set of 304 inflected verbs. Error Class % of Error Reduplication 28.9 TAM Inflection 28.5 OOV root 26.3 OOV inc. nominals 13.2 Alternation 2.2 Figure 4: Error analysis of Lane and Bird (2019)’s FST model of Kunwinjku verbs shows 5 classes of error and the percent of the total error attributed to each class. as the number of correctly analysed forms as a fraction of the number of analysed forms. We distinguish accuracy and precision because the ability of a model to withhold prediction in case of uncertainty is useful in certain application contexts. The results of the evaluation show that while the FST is fairly high-precision, its accuracy is limited by the imperfect coverage of verb stems in the lexicon (Fig. 3). The FST relies on a lexicon to provide analyses for inflected forms, and when it comes across OOV morphs, or verb stems modified by processes like reduplication, it fails to return an analysis. We sort the coverage issues into classes, and remark that the largest source of error comes from reduplication, followed by variation in tense/aspect/mood (TAM) inflection, OOV stems, OOV incorporated nominals, and exceptions to the d-flapping alternation rule (Fig. 4). We address each of these problems in the following sections. 
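The three scores just defined can be summarized in a short sketch (our own illustrative code; an analyzer that fails to return an analysis is represented by None):

```python
def evaluate(golds, predictions):
    """Coverage, accuracy and precision as defined above.

    golds:       list of gold analyses (strings)
    predictions: list of predicted analyses, or None when the analyzer
                 returns no analysis (e.g. an OOV stem for the FST)
    """
    n = len(golds)
    analysed = [(g, p) for g, p in zip(golds, predictions) if p is not None]
    correct = sum(1 for g, p in analysed if g == p)
    coverage = len(analysed) / n
    accuracy = correct / n
    precision = correct / len(analysed) if analysed else 0.0
    return coverage, accuracy, precision

# Tiny made-up example: 4 test forms, one unanalysed, one analysed wrongly.
golds = ["A1", "A2", "A3", "A4"]
preds = ["A1", None, "A3", "WRONG"]
print(evaluate(golds, preds))  # (0.75, 0.5, ~0.667)
```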
4 Methods In this section we discuss the approach which leverages an incomplete FST to produce a more robust neural morphological analyzer for Kunwinjku. Those steps include generating training pairs from an FST, tokenizing the data, resampling from the dataset to simulate distributional signal, hallucinating missing structures into the dataset, and training a neural encoder-decoder model on the resampled data. 4.1 Data generation from an FST Given our low resource setting, training a neural encoder-decoder model like those used in neural machine translation (NMT) is not possible without augmenting what resources we do have. Following the established template of recent work on neural morphological analysis for low resource polysynthetic languages (Micher, 2017; Moeller et al., 2018; Schwartz et al., 2019) we use the FST model to generate morphotactically valid pairs of surface and analyzed verbs. For the purpose of training the base neural model, we adapted the Foma tool to randomly generate 3,000,000 surface/analysis pairs from the FST (see Fig. 6 for an example of a tokenized pair). An automatic process removed duplicates, leaving us with 2,666,243 unique pairs which we partitioned into an .8/.1/.1 train/dev/test split. In Schwartz et al. (2019)’s work on modelling complex nouns in Yupik, they generate a training set which exhaustively pairs every Yupik noun root with every inflectional suffix, regardless of the resulting semantic fidelity. In our case, it was not feasible to exhaustively generate the training data, as it would have led to 4.9×1012 instances (Fig. 5). In effect, the training set represents .00004% of the space over which we seek to generalize. 4.2 Tokenization To prepare the data for training a seq2seq model, we first collect the glossed inflected verb forms, perform tokenization, and organize them into sourcetarget pairs. We chose a tokenization scheme which treats graphemes as atomic units. Morph labels are also treated mostly as atomic units, with the exception being for fused labels which we break into their individual linguistic components (Fig. 6). For example the pronominal morph in Kunwinjku can simultaneously express both subject and object, as well as tense. Consider the pronominal prefix kabenbene- which we gloss as 3sg.3ua.nonpast and tokenize as [ 3sg . 3ua . nonpast ]. Choosing to break up labels in the fused morphological slots prevents an unnecessary proliferation of entries in the target vocabulary, as individual units like 3sg, 3ua, and past can be shared by multiple pronominals. Our choice to tokenize the source forms and verb root strings at the grapheme level reflects our desire to loosen the model’s vocabulary such that it is 6656 TSO DIR ASP MSC1 BEN MSC2 GIN BPIN COM root RR TAM Total 157 x 3 x 2 x 24 x 2 x 4 x 78 x 32 x 2 x 541 x 2 x 5 = 4.9x1012 Figure 5: An estimate for all morphotactically valid sequences covered by the Kunwinjku FST equipped to handle variation at the orthographic level, and possible OOV stems. 4.3 Simulating distributional information Generating from an FST at random fails to capture valuable information about the distribution of morphs. For example in Kunwinjku, body part incorporable nouns (BPIN) can occur adjacent to the verb root. Both categories are open class, meaning that there is a high likelihood in the low-resource setting that either or both are out-of-vocabulary. How then does the analyzer decide where to place the boundary? Perhaps the entire sequence is a single out-of-vocabulary root. 
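To make the tokenization scheme of Section 4.2 concrete, here is a small sketch of grapheme-level source tokenization and fused-label splitting. This is our own code: the digraph list is an illustrative subset rather than a complete inventory of Kunwinjku graphemes.

```python
# Treat orthographic digraphs such as "ng" and "nj" as single graphemes,
# and split fused analysis labels like "3sg.3ua.nonpast" into their parts.
DIGRAPHS = ("ng", "nj", "rr", "dj")  # illustrative subset only

def tokenize_surface(word):
    tokens, i = [], 0
    while i < len(word):
        if word[i:i + 2] in DIGRAPHS:
            tokens.append(word[i:i + 2])
            i += 2
        else:
            tokens.append(word[i])
            i += 1
    return tokens

def tokenize_label(label):
    # "[3sg.3ua.nonpast]" -> ["[", "3sg", ".", "3ua", ".", "nonpast", "]"]
    parts = label.strip("[]").split(".")
    out = ["["]
    for k, part in enumerate(parts):
        if k:
            out.append(".")
        out.append(part)
    out.append("]")
    return out

print(tokenize_surface("bikanjnguneng"))   # b i k a nj ng u n e ng
print(tokenize_label("[3sg.3ua.nonpast]"))
```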
Our intuition is that knowing the likelihood of co-occurrence for two analysis tags can provide signal to help disambiguate. Some morph sequences are inevitably more frequent than others, and we would like to represent that information in the training set. To this end, we propose a method for simulating distributional information in the training set. First, we want to score any analyzed form, giving higher scores to forms that contain more likely sequences. We define M as the sequence of morph tags which make up an analysis, where mi is the morph tag at index i. The scoring function is defined as follows: (5) score(M) = 1 n nP i log P(mi, mi+1) The joint probability of adjacent tags is estimated from a corpus of unannotated text, here, selected books from the Kunwinjku Bible. Everything the existing FST can analyse as a verb is considered to be a verb, and is used to calculate the joint probability table. The training set is tagged with the FST1, and ranked according to the scoring function. We split the sorted data into buckets defined by their morphotactic likelihood, and then sample from them according to a Zipf distribution. The effect is that more probable sequences are more likely to occur in the training data than less likely examples, thus approximating the distribution of morphotactic structure we would expect to see in a natural corpus. 1By using an FST with imperfect recall we are not capturing true distributional information; it is simply a heuristic. 4.4 Hallucinating reduplicative structure One shortcoming of the Kunwinjku FST model is that it does not account for reduplicative structure, due to the complexity of modelling recursive structure in the linear context of finite state machines (Culy, 1985; Roark et al., 2007). As noted previously, reduplication is responsible for 28.9% of the FST’s coverage error when evaluated on the test set of inflected verbs. If reduplication is not modeled by the FST, then reduplication will also not be represented in the training set generated by that FST. We posit that if data hallucination has been shown to improve performance in the language-agnostic setting (Anastasopoulos and Neubig, 2019; Silfverberg et al., 2017), than it is likely that linguistically-informed hallucination can provide a similar reinforcement in Kunwinjku. In line with this, we developed an extension to the data generation process which hallucinates reduplicative structure into a subset of the training data. Kunwinjku has three main types of partial verbal reduplication signaling iterative, inceptive, and extended meaning. Moreover, each type of reduplication can have more than one CV template, depending on which paradigm the verb belongs to. Figure 7 documents the three types of reduplication, and serves as the template for the reduplicative structure hallucinator. First, the hallucinator module samples n% of the FST-generated pairs and strips away the affixes to isolate the root. For each root, one of the three reduplication types (iterative, inceptive, or extended) is selected at random, and the root is matched against the available CV templates. The longest pattern which matches the root is selected, and the pattern-matching portion of the root is copied and prepended to the root. Both the surface and analyzed form are updated to reflect the change, and the new training pairs are appended to the original list of FST-generated pairs. 4.5 Training We trained an encoder-decoder model on the dataset of 2,114,710 surface/analyzed form pairs (the Base model). 
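The scoring and resampling procedure of Section 4.3 can be sketched as follows. This is our own simplified code: the tag sequences, the number of buckets, and the Zipf exponent are illustrative choices, not the authors' exact settings.

```python
import math
import random
from collections import Counter

def bigram_probs(tag_sequences):
    """Estimate the joint probability of adjacent analysis tags from
    text that the existing FST can analyze."""
    counts = Counter()
    for tags in tag_sequences:
        counts.update(zip(tags, tags[1:]))
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def morphotactic_score(tags, probs, floor=1e-8):
    """Equation (5): mean log joint probability of adjacent tag pairs."""
    pairs = list(zip(tags, tags[1:]))
    if not pairs:
        return math.log(floor)
    return sum(math.log(probs.get(p, floor)) for p in pairs) / len(pairs)

def zipf_resample(examples, probs, n_buckets=5, size=1000, s=1.0, seed=0):
    """Sort by score, split into buckets, then sample buckets with Zipf weights
    so that more plausible morph sequences dominate the training set."""
    rng = random.Random(seed)
    ranked = sorted(examples, key=lambda ex: morphotactic_score(ex["tags"], probs),
                    reverse=True)
    step = max(1, len(ranked) // n_buckets)
    buckets = [ranked[i * step:(i + 1) * step] for i in range(n_buckets - 1)]
    buckets.append(ranked[(n_buckets - 1) * step:])
    weights = [1.0 / (rank + 1) ** s for rank in range(n_buckets)]
    sample = []
    for _ in range(size):
        bucket = rng.choices(buckets, weights=weights, k=1)[0]
        if bucket:
            sample.append(rng.choice(bucket))
    return sample

# Toy usage with made-up analysis tag sequences.
data = [{"tags": ["1sg", "GIN", "root", "PP"]},
        {"tags": ["3sg", "root", "NP"]},
        {"tags": ["1sg", "BPIN", "root", "PP"]}]
probs = bigram_probs([ex["tags"] for ex in data])
print(len(zipf_resample(data, probs, n_buckets=3, size=10)))
```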
We then hallucinate reduplication into 8% of the Base data, and 6657 b i k a nj ng u n e ng −> [ 3 sg . 3Hsg . PST ] [ BPIN ] ng u [ PP ] Figure 6: An example of a tokenized source/target training pair, where we treat source graphemes, target labels, fused target label components, and verb root graphemes as atomic units. Type Pattern(s) Unreduplicated Verb Reduplicated Verb Semantic Effect on Verb (V) Iterative CVC dadjke = cut dadj-dadjke = cut to pieces Doing V over and over again CV(C)CV(h) bongu = drink bongu-bongu = keep drinking CVnV(h) re = go rengeh-re = go repeatedly Inceptive CV(n)(h) yame = spear (sth) yah-yame = try (and fail) to spear (sth) Failed attempt to do V durnde = return durnh-durnde = start returning Starting to do V Extended CVC(C) ∥ men djordmen = grow djordoh-djordmen = grow all over the place Doing V all over the place CVC(C) ∥ me wirrkme = scratch wirri-wirrkme = scratch all over Figure 7: Reduplication in Kunwinjku has three forms, and each form has its own CV templates defining how much of the verb is captured and copied. In the case where we’ve used the form X ∥ Y, we mean that pattern X is the reduplicated segment if found in the context of Y. Figure adapted from (Evans, 2003). combine that hallucinated data to the base training data set (the Base+halluc[...] models). The model setup is similar to the one described in (Schwartz et al., 2019). We use MarianNMT: a fast, open-source toolkit which implements neural models for machine translation (Junczys-Dowmunt et al., 2018). We used a shallow attentional encoderdecoder model (Bahdanau et al., 2014) using the parameters described in (Sennrich et al., 2016): the encoder and decoder each have 1 hidden layer of size 1024. We use cross-validation as the validation metric, set dropout to .2 on all RNN inputs, and enable early stopping to avoid overfitting. We use the same setup and parameters for all NMT models mentioned in this paper. A full accounting of the MarianNMT settings used can be seen in the Appendix. 5 Evaluation of the Neural Models We begin by reporting the performance of the neural models in terms of coverage, accuracy, and precision, so that they can be compared with the evaluation of the FST model, described in Section 3.2. Additionally, we measure the performance of the neural models in terms of precision (P), recall (R), and F1 on the morph level: For each morph tag in the gold target test set, we calculate P, R, and F1, and then calculate the macro-average P, R, and F1 across all tags in the test set (Fig. 9). This method is more granular than all-or-nothing accuracy over the entire translated sequence, and allows us to get a better picture of how the models are doing on the basis of individual tags. We observed an issue with syncretic ambiguity which complicates the evaluation process (also noted by Schwartz et al. 2019; Moeller et al. 2018). For example, the pronominal prefix kabindi- can be glossed: [3ua.3ua.nonpast], or [3pl.3ua.nonpast], or [3ua.3pl.nonpast], or [3pl.3pl.nonpast]. Here, the pronominal expresses both the subject and object, and is not explicit whether that subject or object is the 3rd person dual or plural, in any of four possible combinations. The disambiguation cannot be resolved at the level of the isolated verb. Our initial experiment with the base data set achieved 100% coverage and 68.3% accuracy on the test set. When confronted by the same problem, Moeller et al. (2018) decided to collapse ambiguous tags into an underspecified meta-tag. 
For example, for the Kunwinjku data, we might collapse the four tags above into [3pl.3pl.nonpast]. However, doing so results in a potential loss of information. Given the wider sentential context, the pronominal could be possibly be disambiguated, so long as the distinction is preserved and all equally-valid analyses are returned. Further, as Schwartz et al. (2019) point out, in the Yupik language it is possible for this ambiguity to exist across other categories which are not easily collapsed. In Kunwinjku, an example of this would be the pronominals [1sg.2.past] and [3sg.past] which differ in terms of number and valency, and yet share the same null surface form. Their differences are such that they can not be easily collapsed into a single meta-tag. Therefore we do not penalize the model for producing any variation of equally valid analyses given the surface form, and for each model we adjust the evaluation for syncretism in a post-processing step. 6658 6 Results and Discussion All of the neural models outperform the FST in terms of accuracy and coverage (Fig. 8). However, the FST is more precise, and this may be useful in certain application contexts. The best model is Base+halluc+resample, which improves on the FST by 10.3 percentage points. On the morphlevel, we see that the neural models containing the hallucinated reduplication data outperform the base neural model (Fig. 9). Acc Cov Precision FST 84.4 88.5 95.4 Base 89.1 100 89.1 Base+halluc 93.7 100 93.7 Base+halluc+resample 94.7 100 94.7 Figure 8: All-or-nothing accuracy and coverage of the three morphological analyzer models Precision Recall F1 Base 88.8 89.9 89.0 Base+halluc 91.6 92.6 91.8 Base+halluc+resample 93.7 93.6 93.4 Figure 9: Morph-level performance of shallow neural sequence models. Macro P/R/F1 across all morph tags. We posited that the difficulties encountered by the FST model—namely reduplication, out-ofvocabulary items, and spelling variation—could be at least partially addressed by training a neural model on character and tag sequences, and hallucinating instances of reduplication into the training set. For the most part, this held true, as we see gains across all error classes (cf. Sec. 3.2). Here we report performance with respect to the three largest error classes: reduplication, OOV verbs, and OOV nouns. 6.1 Reduplication As expected, neither the FST nor the Base neural model succeeds in recognizing reduplication. It would be impossible, as the REDUP tag does not appear in either of their vocabularies. The Base+halluc model’s performance gain over the Base model can be accounted for entirely by the fact that it achieved 100% recall of reduplicative structure. Precision, on the other hand was 57.9%. Looking at the errors, we find that the imprecise predictions were all applied to instances about which the system was already wrong in previous Unseen Verbs Base+halluc+resample ✓/ wobekkang [GIN]bekka  ngakohbanjminj [GIN][REDUP]me  ngarrukkendi dukkendi ✓ kamenyime [GIN]yime  yimalngdarrkiddi darrke[PERSIST]  ngamdolkkang [DIR][GIN]ka  dolkkang [GIN]ka  karrukmirri dukmirri ✓ ngurrimirndemornnamerren mornname ✓ Unseen GIN/BPIN/ASP Base+halluc+resample ✓/ kannjilngmarnbom [GIN]  yibenkangemarnbom [REDUP]  kankangemurrngrayekwong [GIN]  kankangemurrngrayekwong [BPIN] ✓ kankangemurrngrayekwong [REDUP]  kankangemarnbom [REDUP]  ngarribangmemarnbuyi [BPIN]  yimalngdarrkiddi [GIN][REDUP]  Figure 10: Column 1 shows the list of verbs and nouns (in bold) which are are unseen in the FST lexicon. 
Column 2 is the Base neural model’s prediction covering the character sequence corresponding to the unseen item. Column 3 indicates whether the neural model’s analysis of the morph is correct. models, meaning that the impact of reduplicative hallucination between models was only positive. In the Base+halluc+resample model, recall of reduplicative structure was also 100%, and precision increased slightly to 58.8%. 6.2 Discovering New Lexical Items The neural models correctly identify some unseen verb stems, but still show room for improvement. We observe a tendency across all neural models to predict verb stems which have been seen in training, and which are also a substring of the observed unknown root. For example, the training set does not contain any verbs with the root dolkka, but it shows up 3 times in the test set. The analyses of all dolkka-rooted verbs were the same in both the Base+halluc and Base+halluc+resample models: they propose ka, a known root from the training set, and presume dolk- to be an incorporable noun2. Figure 10 shows a sample of OOV verb stems and nouns from the test set. In the unseen verbs table, this behavior of preferring previously observed verb stems is the cause of error in every case. Further difficulty comes in distinguishing between general (GIN) and body-part (BPIN) incorporated noun classes. The low rate of success in positing unknown incorporated nouns is, in 2Possibly by virtue of its orthographic proximity to bolk-, a common general incorporable noun which means “land.” 6659 large part, attributed to the fact that the large GIN and open BPIN classes often occur adjacent to each other and to the root. The neural model has difficulty making useful predictions when multiple morphs in this region are previously unobserved. Overall, the Base+halluc+resample model correctly posited 33% of unseen stems, and 12.5% of unseen nouns from the FST error analyses. 6.3 Impact of distributional information This technique to approximate distributional information led to a small improvement in overall accuracy, and in tag-level P/R/F1. We had expected that this information might help the neural models learn something about the relative frequencies of GINs or BPINs, which could help make decisions about how to draw the boundary between unseen stems and unseen incorporated nominals. Instead, we saw distributive information helped to disambiguate the boundaries between morph classes with fewer members. One representative example is the case of yikimang, whose root is kimang. Before resample, the neural models interpret the yi- as the comitative prefix yi-, and injects a spurious COM tag into the analysis. After resample, it correctly omits the COM tag, interpreting yi- as the 2nd person singular pronominal. In the unfiltered FST-generated training data, COM occurs in 53% of instances. In the resampled data, it occurs in 22% of instances. When all morph labels are equally likely to occur, the model is just as likely to predict any morph label compatible with the character sequence. Resampling the training data according to a more realistic distribution leads to stronger morph transition priors, which tip the scale in favor of the analysis with a more likely tag sequence. 7 Conclusion We have shown that complex features of polysynthetic morphology, such as reduplication and distributional morphotactic information, can be simulated in the dataset and used to train a robust neural morphological analyzer for a polysynthetic language. 
In particular, we showed that a robust neural model can be bootstrapped in a relatively short space of time from an incomplete FST. This work represents a successful first iteration of a process whereby the morphological model can be continually improved. Indeed, the concept of bootstrapping a model implies an iterative development story where much of the scaffolding used in early efforts will eventually fall away. For example, once the bootstrapped model has been used to tag verbs containing reduplication, we can confirm the model’s high-confidence predictions and retrain. In this second iteration, we may find that we no longer need to hallucinate reduplication because it is sufficiently represented in the new training set. Similarly, once we have applied the complete neural model to a corpus of natural text, we will no longer need to approximate distributional information. For researchers developing robust morphological analyzers for low resource, morphologically complex languages, this work represents a template of model development which is well-suited for the context. Producing a viable morphological analyzer is the first step towards building improved dictionary search interfaces, spell-checking tools, and computer-assisted language learning applications for communities who speak low-resource languages. The pattern of training robust systems on data that has been augmented by the knowledge captured in symbolic systems could be applied to areas outside of morphological analysis, and is a promising avenue of future exploration. Acknowledgments We are grateful for the support of the Warddeken Rangers of West Arnhem. This work was covered by a research permit from the Northern Land Council, and was sponsored by the Australian government through a PhD scholarship, and grants from the Australian Research Council and the Indigenous Language and Arts Program. We are grateful to four anonymous reviewers for their feedback on an earlier version of this paper. 6660 References Antonios Anastasopoulos and Graham Neubig. 2019. Pushing the limits of low-resource morphological inflection. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 984–996, Hong Kong, China. Association for Computational Linguistics. Vasilisa Andriyanets and Francis Tyers. 2018. A prototype finite-state morphological analyser for Chukchi. In Proceedings of the Workshop on Computational Modeling of Polysynthetic Languages, pages 31–40, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Antti Arppe, Christopher Cox, Mans Hulden, Jordan Lachler, Sjur N Moshagen, Miikka Silfverberg, and Trond Trosterud. 2017. Computational Modeling of Verbs in Dene Languages: The Case of Tsuut’ina. Working Papers in Athabaskan (Dene) Languages, pages 51–69. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Brett Baker and Mark Harvey. 2003. Word Structure in Australian Languages. Australian Journal of Linguistics, 23:3–33. Kenneth R Beesley and Lauri Karttunen. 2003. Finitestate morphology: Xerox tools and techniques. CSLI, Stanford. Bininj Kunwok Language Project. 2019. Bininj Kunwok: kunwok dja mankarre kadberre–our language, our culture. https://bininjkunwok.org.au/. Accessed: 2019-10-10. Ronald Cardenas and Daniel Zeman. 2018. A morphological analyzer for shipibo-konibo. 
In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 131–139. Emily Chen and Lane Schwartz. 2018. A morphological analyzer for St. Lawrence Island/Central Siberian Yupik. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation. Christopher Culy. 1985. The complexity of the vocabulary of Bambara. In The Formal Complexity of Natural Language, pages 349–357. Springer. Mark Dras, Franc¸ois Lareau, Benjamin B¨orschinger, Robert Dale, Yasaman Motazedi, Owen C Rambow, Myfany Turpin, and Morgan Elizabeth Ulinski. 2012. Complex predicates in arrernte. Steven Etherington and Narelle Etherington. 1998. Kunwinjku Kunwok: A Short Introduction to Kunwinjku Language and Society: with Extra Notes on Gundjeihmi. Gunbalanya: Kunwinjku Language Centre. Nicholas Evans. 2003. A Pan-dialectal Grammar of Bininj Gun-Wok (Arnhem Land): Mayali, Kunwinjku and Kune. Pacific Linguistics. Australian National University. Mans Hulden. 2009. Foma: a finite-state compiler and library. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 29–32. Association for Computational Linguistics. Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, et al. 2018. Marian: Fast neural machine translation in c++. arXiv preprint arXiv:1804.00344. Ronald M Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational linguistics, 20(3):331–378. Jordan Lachler, Lene Antonsen, Trond Trosterud, Sjur Moshagen, and Antti Arppe. 2018. Modeling Northern Haida verb morphology. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation. William Lane and Steven Bird. 2019. Towards A Robust Morphological Analyzer for Kunwinjku. In Proceedings of ALTA 2019. None yet. Krister Lind´en, Erik Axelson, Senka Drobac, Sam Hardwick, Juha Kuokkala, Jyrki Niemi, Tommi A Pirinen, and Miikka Silfverberg. 2013. Hfst—a system for creating nlp tools. In International workshop on systems and frameworks for computational morphology, pages 53–71. Springer. Patrick Littell. 2018. Finite-state morphology for kwak’wala: A phonological approach. In Proceedings of the Workshop on Computational Modeling of Polysynthetic Languages, pages 21–30, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Patrick Littell, Anna Kazantseva, Roland Kuhn, Aidan Pine, Antti Arppe, Christopher Cox, and MarieOdile Junker. 2018. Indigenous language technologies in canada: Assessment, challenges, and successes. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2620–2632. Jeffrey Micher. 2017. Improving coverage of an inuktitut morphological analyzer using a segmental recurrent neural network. In Proceedings of the 2nd Workshop on the Use of Computational Methods in the Study of Endangered Languages, pages 101–106. 6661 Sarah Moeller, Ghazaleh Kazeminejad, Andrew Cowell, and Mans Hulden. 2018. A neural morphological analyzer for arapaho verbs learned from a finite state transducer. In Proceedings of the Workshop on Computational Modeling of Polysynthetic Languages, pages 12–20. Brian Roark, Richard Sproat, and Richard William Sproat. 2007. Computational approaches to morphology and syntax, volume 4. Oxford University Press. Carl Rubino. 2005. Reduplication: Form, function and distribution. 
Studies on reduplication, pages 11–29. Lane Schwartz, Emily Chen, Benjamin Hunt, and Sylvia Schreiner. 2019. Bootstrapping a Neural Morphological Analyzer for St. Lawrence Island Yupik from a Finite-State Transducer. In Proceedings of the Workshop on Computational Methods for Endangered Languages, pages 87–96. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Edinburgh neural machine translation systems for wmt 16. arXiv preprint arXiv:1606.02891. Miikka Silfverberg, Adam Wiemerslage, Ling Liu, and Lingshuang Jack Mao. 2017. Data augmentation for morphological reinflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 90–99, Vancouver. Association for Computational Linguistics. Appendix We provide the MarianNMT configuration settings used for all neural models in this work. --type amun --dim-vocabs 600 500 --mini-batch-fit -w 3500 --layer-normalization --dropout-rnn 0.2 --dropout-src 0.1 --dropout-trg 0.1 --early-stopping 5 --valid-freq 10000 --save-freq 10000 --disp-freq 1000 --valid-metrics cross-entropy --overwrite --keep-best --seed 1111 --exponential-smoothing --normalize=1 --beam-size=12 --quiet-translation
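As a companion to the evaluation described in Section 5, here is a minimal sketch of morph-level macro precision/recall/F1. It is our own code, not the authors' evaluation script, and it matches predicted tags against gold tags as multisets within each analysis.

```python
from collections import defaultdict

def macro_prf(gold_seqs, pred_seqs):
    """Per-tag P/R/F1, macro-averaged over all tags seen in gold or predictions."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for gold, pred in zip(gold_seqs, pred_seqs):
        remaining = list(gold)
        for tag in pred:
            if tag in remaining:
                remaining.remove(tag)  # match each predicted tag at most once
                tp[tag] += 1
            else:
                fp[tag] += 1
        for tag in remaining:          # unmatched gold tags
            fn[tag] += 1
    tags = set(tp) | set(fp) | set(fn)
    p_all, r_all, f_all = [], [], []
    for t in tags:
        p = tp[t] / (tp[t] + fp[t]) if tp[t] + fp[t] else 0.0
        r = tp[t] / (tp[t] + fn[t]) if tp[t] + fn[t] else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        p_all.append(p); r_all.append(r); f_all.append(f)
    n = len(tags)
    return sum(p_all) / n, sum(r_all) / n, sum(f_all) / n

# Toy example with one gold/predicted analysis pair (made-up tags).
gold = [["[3sg.PST]", "[GIN]", "root:bu", "[PP]"]]
pred = [["[3sg.PST]", "[BPIN]", "root:bu", "[PP]"]]
print(macro_prf(gold, pred))
```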
2020
594
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6662–6671 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6662 Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation Ning Ding1,2 , Dingkun Long2, Guangwei Xu2, Muhua Zhu2, Pengjun Xie2, Xiaobin Wang2, Hai-Tao Zheng1∗ 1Tsinghua University, China 2Alibaba Group {dingn18}@mails.tsinghua.edu.cn, {zhumuhua}@gmail.com, {dingkun.ldk,kunka.xgw}@alibaba-inc.com, {chengchen.xpj,xuanjie.wxb}@alibaba-inc.com, {zheng.haitao}@sz.tsinghua.edu.cn Abstract Fully supervised neural approaches have achieved significant progress in the task of Chinese word segmentation (CWS). Nevertheless, the performance of supervised models tends to drop dramatically when they are applied to outof-domain data. Performance degradation is caused by the distribution gap across domains and the out of vocabulary (OOV) problem. In order to simultaneously alleviate these two issues, this paper proposes to couple distant annotation and adversarial training for crossdomain CWS. For distant annotation, we rethink the essence of “Chinese words” and design an automatic distant annotation mechanism that does not need any supervision or pre-defined dictionaries from the target domain. The approach could effectively explore domain-specific words and distantly annotate the raw texts for the target domain. For adversarial training, we develop a sentence-level training procedure to perform noise reduction and maximum utilization of the source domain information. Experiments on multiple realworld datasets across various domains show the superiority and robustness of our model, significantly outperforming previous state-ofthe-art cross-domain CWS methods. 1 Introduction Chinese is an ideographic language and lacks word delimiters between words in written sentences. Therefore, Chinese word segmentation (CWS) is often regarded as a prerequisite to downstream tasks in Chinese natural language processing. This task is conventionally formalized as a characterbased sequence tagging problem (Peng et al., 2004), where each character is assigned a specific label to denote the position of the character in a word. With the development of deep learning techniques, recent years have also seen increasing interest in applying neural network models onto CWS (Cai ∗Corresponding author Figure 1: Different word distributions for the newswire domain and the medical domain. and Zhao, 2016; Liu et al., 2016; Cai et al., 2017; Ma et al., 2018). These approaches have achieved significant progress on in-domain CWS tasks, but they still suffer from the cross-domain issue when they come to processing of out-of-domain data. Cross-domain CWS is exposed to two major challenges: 1) Gap of domain distributions. This is a common issue existing in all domain adaptation tasks. Source domain data and target domain data generally have different distributions. As a result, models built on source domain data tend to degrade performance when they are applied to target domain data. Generally, we need some labeled target domain data to adapt source domain models, but it is expensive and time consuming to manually craft such data. 2) Out of vocabulary (OOV) problem, which means there exist some words in the testing data that never occur in the training data. Source domain models have difficulties in recognizing OOV words since source domain data contains no information on the OOVs. 
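As a concrete illustration of the character-based tagging formulation mentioned above, the following sketch (our own code, using the standard BMES scheme and a toy segmented sentence) converts word segmentation into per-character labels.

```python
def to_bmes(words):
    """Map a segmented sentence (list of words) to character-level BMES tags."""
    chars, tags = [], []
    for w in words:
        chars.extend(w)
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return chars, tags

# Toy example: "溶菌酶 / 的 / 科学 / 研究" (lysozyme / DE / science / research)
chars, tags = to_bmes(["溶菌酶", "的", "科学", "研究"])
print(list(zip(chars, tags)))
# [('溶','B'), ('菌','M'), ('酶','E'), ('的','S'),
#  ('科','B'), ('学','E'), ('研','B'), ('究','E')]
```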
Figure 1 presents examples to illustrate the difference between the word distributions of the newswire domain and the medical domain. Segmenters built on the newswire domain have very limited information to segment domain-specific words like “溶菌酶(Lysozyme)”. Previous approaches to cross-domain CWS mainly fall into two groups. The first group aims to attack the OOV issue by utilizing predefined dictionaries from the target domain to facilitate 6663 cross-domain CWS (Liu et al., 2014; Zhao et al., 2018; Zhang et al., 2018), which are apt to suffer from scalability since not all domains possess predefined dictionaries. In other words, these methods are directly restricted by external resources that are available in a target domain. Studies in the second group (Ye et al., 2019) attend to learn target domain distributions like word embeddings from unlabeled target domain data. In this approach, source domain data is not fully utilized since the information from source domain data is transferred solely through the segmenter built on the data. In this paper, we propose to attack the aforementioned challenges simultaneously by coupling the techniques of distant annotation and adversarial training. The goal of distant annotation is to automatically construct labeled target domain data with no requirement for human-curated domain-specific dictionaries. To this end, we rethink the definition and essence of “Chinese words” and develop a word miner to obtain domain-specific words from unlabeled target domain data. Moreover, a segmenter is trained on the source domain data to recognize the common words in unlabeled target data. This way, sentences from the target domain are assigned automatic annotations that can be used as target domain training data. Although distant annotation could provide satisfactory labeled target domain data, there still exist annotation errors that affect the final performance. To reduce the effect of noisy data in automatic annotations in target domain data and make better use of source domain data, we propose to apply adversarial training jointly on the source domain dataset and the distantly constructed target domain dataset. And the adversarial training module can capture deeper domain-specific and domain-agnostic features. To show the effectiveness and robustness of our approach, we conduct extensive experiments on five real-world datasets across various domains. Experimental results show that our approach achieves state-of-the-art results on all datasets, significantly outperforming representative previous works. Further, we design sufficient subsidiary experiments to prove the alleviation of the aforementioned problems in cross-domain CWS. 2 Related Work Chinese Word Segmentation Chinese word segmentation is typically formalized as a sequence tagging problem. Thus, traditional machine learning models such as Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs) are widely employed for CWS in the early stage (Wong and Chan, 1996; Gao et al., 2005; Zhao et al., 2010). With the development of deep learning methods, research focus has been shifting towards deep neural networks that require little feature engineering. Chen et al. (2015) are the first that use LSTM (Hochreiter and Schmidhuber, 1997) to resolve long dependencies in word segmentation problems. 
Since then, the majority of efforts is building end-to-end sequence tagging architectures, which significantly outperform the traditional approaches on CWS task (Wang and Xu, 2017; Zhou et al., 2017; Yang et al., 2017; Cai et al., 2017; Chen et al., 2017; Huang et al., 2019b; Gan and Zhang, 2019; Yang et al., 2019). Cross-domain CWS As a more challenging task, cross-domain CWS has attracted increasing attention. Liu and Zhang (2012) propose an unsupervised model, in which they use a character clustering method and the self-training algorithm to jointly model CWS and POS-tagging. Liu et al. (2014) apply partial CRF for cross-domain CWS via obtaining a partial annotation dataset from freely available data. Similarly, Zhao et al. (2018) build partially labeled data by combining unlabeled data and lexicons. Zhang et al. (2018) propose to incorporate the predefined domain dictionary into the training process via predefined handcrafted rules. Ye et al. (2019) propose a semi-supervised approach that leverages word embeddings trained on the segmented text in the target domain. Adversarial Learning Adversarial learning is derived from the Generative Adversarial Nets (GAN) (Goodfellow et al., 2014), which has achieved huge success in the computer vision field. Recently, many works have tried to apply adversarial learning to NLP tasks. (Jia and Liang, 2017; Li et al., 2018; Farag et al., 2018) focus on learning or creating adversarial rules or examples for improving the robustness of the NLP systems. For cross-domain or cross-lingual sequence tagging, the adversarial discriminator is widely used to extract domain or language invariant features (Kim et al., 2017; Huang et al., 2019a; Zhou et al., 2019). 3 Our Approach Figure 2 shows the framework of our approach to cross-domain CWS, which is mainly composed 6664 Figure 2: Detailed architecture of DAAT, the left part is the structure of the Distant Annotation (DA) module. The annotated dataset on target domain will be sent to the Adversarial Training (AT) module on the right part. of two components: 1) Distant Annotation (DA), and 2) Adversarial Training (AT). In the following, we will describe details of the framework (DAAT) from the left to right in Figure 2. In this paper, bold-face letters (e.g. W ) are used to denote vectors, matrices and tensors. We use numerical subscripts to indicate the indices of a sequence or vector. We use the subscript of src to indicate the source domain and tgt to denote the target domain. 3.1 Distant Annotation As illustrated in Figure 2, given a labeled source domain dataset and an unlabeled target domain dataset, distant annotation (DA) aims to automatically generate word segmentation results for sentences in the target domain. DA has two main modules, including a base segmenter and a Domainspecific Words Miner. Specifically, the base segmenter is a GCNN-CRF (Wang and Xu, 2017) model trained solely on the labeled source domain data and is used to recognize words that are common among the source and target domains. Domain-specific Words Miner is designed to explore the target domain-specific words. Base Segmenter In the CWS task, given a sentence s = {c1, c2, ..., cn} , following the BMES tagging scheme, each character ci is assigned one of the labels in {B, M, E, S}, indicating whether the character is in the beginning, middle, end of a word, or the character is merely a single-character word. For a sentence s, we first use an embedding layer to obtain the embedding representation ei for each character ci. 
Then, the sentence s can be represented as e = {e1, e2, ..., en} ∈Rn×d, where d denotes the embedding dimension. e will be fed into the GCNN model (Dauphin et al., 2017; Gehring et al., 2017), which computes the output as: Hs = (e ∗W + b) ⊙σ(e ∗V + c), (1) here, W ∈Rk×d×l, b ∈Rl, V ∈Rk×d×l, c ∈Rl. d and l are the input and output dimensions respectively, and k is the window size of the convolution operator. σ is the sigmoid function and ⊙represents element-wise product. We adopt a stacking convolution architecture to capture long distance information, the output of the previous layers will be treated as input of the next layer. The final representation of sentence s is Hs = {h1, h2, ..., hn}. Correlations among labels are crucial factors in sequence tagging. Particularly, for an input sequence ssrc = {c1, c2, ..., cn} (take source domain data as example), the corresponding label sequence is L = {y1, y2, ..., yn}. The goal of CRF is to compute the conditional probability distribution: P(L|ssrc)= exp( nP i=1 (S(yi)+T(yi−1, yi))) P L′∈C exp( nP i=1 (S(y′ i)+T(y′ i−1, y′ i))) , (2) where T denotes the transition function to calculate the transition scores from yi−1 to yi. C contains all the possible label sequences on sequence s and L′ is a random label sequence in C. And S represents the score function to compute the emission 6665 score from the hidden feature vector hi to the corresponding label yi, which is defined as: S(yi) = W yihi + byi, (3) W yi and byi are learned parameters specific to the label yi. To decode the highest scored label sequence, a classic Viterbi (Viterbi, 1967) algorithm is utilized as the decoder. The loss function of the sequence tagger is defined as the sentence-level negative loglikelihood: Lsrc = − X log P(L|ssrc). (4) The loss of the target tagger Ltgt could be computed similarly. Domain-specific Words Miner As mentioned in section 1, previous works usually use existing domain dictionaries to solve the domain-specific noun entities segmentation problem in cross-domain CWS. But this strategy does not consider that it is properly difficult to acquire a dictionary with high quality for a brand new domain. In contrast, we develop a simple and efficient strategy to perform domain-specific words mining without any predefined dictionaries. Given large raw text on target domain and a base segmenter, we can obtain a set of segmented texts Γ = {T1, T2, ..., TN}, where stop-words are removed. Then let γ = {t1, t2, ..., tm} denote all the n-gram sequences extracted from Γ. For each sequence ti, we need to calculate the possibility that it is a valid word. In this procedure, four factors are mainly considered. 1) Mutual Information (MI). MI (Kraskov et al., 2004) is widely used to estimate the correlation of two random variables. Here, we use mutual information between different sub-strings to measure the internal tightness for a text segment, as shown in Figure 3(a). Further, in order to exclude extreme cases, it is necessary to enumerate all the sub-string candidates. The final MI score for one sequence ti consists of n characters ti = {c1...cn} is defined as: MIS(ti)= min j∈[1:n]{ p(ti) p(c1...cj) · p(cj+1...cn)}, (5) where p(·) denotes the probability given the whole corpus Γ. 2) Entropy Score (ES). Entropy is a crucial concept aiming at measuring the uncertainty of random variables in information theory (Jaynes, 1957). (a) Mutual Information to measure the internal tightness. (b) Entropy Score to measure the external flexibility. 
Figure 3: Examples of Mutual Score and Entropy Information factors. . Thus, we can use ES to measure the uncertainty of candidate text fragment, since higher uncertainty means a richer neighboring context. Let Nl(ti) = {l1, ..., lk} and Nr(ti) = {r1, ..., rk′} be the set of left and right adjacent characters for ti. The left entropy score ESl and right entropy ESr of ti can be formulated as ESl(ti)=Pk j −p(lj)log p(lj) and ESr(ti)=Pk′ j −p(rj)log p(rj) respectively. We choose min(ESl(ti), ESr(ti)) as the final score for ti. Hence, ES(ti) could explicitly represent the external flexibility for a text segment (as shown in Figure 3(b)), and further serve as an important indicator to judge whether the segment is an independent word. 3) tf-idf. tf-idf is a widely used numerical statistic that can reflect how important a word is to a document in a collection or corpus. As illustrated in Figure 1, most of the domain-specific words are noun entities, which share a large weighting factor in general. In this work, we define a word probability score pval(ti) to indicate how likely ti can be defined as a valid word. pval(ti)=σ(N[MIS(ti)]+N[ES(ti)]+N[tfidf(ti)]), (6) where σ denotes the sigmoid function and N denotes normalization operation with the max-min method. 4) Word frequency. If ti is a valid word, it should appear repeatedly in Γ. Finally, by setting an appropriate threshold for pval(ti) and word frequence, the Domain-Specific 6666 Words Miner could effectively explore domainspecific words, then construct the domain-specific word collection C for the target domain. In this work, we only consider words ti with pval(ti) ≥ 0.95 and frequency larger than 10. The left part of Figure 2 illustrates the data construction process of DA. First, we utilize the Domain-specific Words Miner to build the collection C for the target domain. Take sentence “溶酶 菌的科学研究(Scientific research on lysozyme)” as an example, we use the forward maximizing match algorithm based on C, which shows that “溶 酶菌(lysozyme)” is a valid word. Hence, the labels of characters “溶”, “酶”, “菌” are “B”, “M”, “E”. For the left part of the sentence, we adopt the baseline segmenter to perform the labelling process. “的科学研究” will be assigned with {“S”, “B”.“E”, “B”, “E”}. To this end, we are able to automatically build annotated dataset on the target domain. 3.2 Adversarial Training The structure of the Adversarial Training module is illustrated as the right part of Figure 2. As mentioned in 3.1, we construct an annotated dataset for the target domain. Accordingly, the inputs of the network are two labeled datasets from source domain S and target domain T . There are three encoders to extract features with different emphases, and all the encoders are based on GCNN as introduced in section 3.1. For domain-specific features, we adopt two independent encoders Esrc and Etgt for source domain and target domain. For domainagnostic features, we adopt a sharing encoder Eshr and a discriminator Gd, which will be both trained as adversarial players. For the two domain-specific encoders, the input sentence is ssrc={cs 1, cs 2, ..., cs n} from source domain, or sentence stgt={ct 1, ct 2, ..., ct m} from the target domain. The sequence representation of ssrc and stgt can be obtained by Esrc and Etgt. Thus, the domain independent representations of ssrc and stgt are Hs ∈Rn×l and Ht ∈Rm×l, where n and m denote the sequence lengths of ssrc and stgt respectively, l is the output dimension of GCNN encoder. 
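Looking back at Section 3.1, the candidate-scoring rule of the Domain-specific Words Miner can be summarized in a short sketch. This is our own simplified code: the normalization bounds and the toy numbers at the end are assumptions, and the helper functions only mirror the definitions of MIS and ES given above.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def normalize(value, lo, hi):
    """Max-min normalization of a raw statistic into [0, 1]."""
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def mutual_info_score(ngram, prob):
    """MIS: min over split points of p(t) / (p(prefix) * p(suffix))."""
    return min(prob(ngram) / (prob(ngram[:j]) * prob(ngram[j:]))
               for j in range(1, len(ngram)))

def entropy(neighbor_counts):
    """ES for one side: entropy of the left or right neighbor distribution."""
    total = sum(neighbor_counts.values())
    return -sum((c / total) * math.log(c / total)
                for c in neighbor_counts.values())

def is_domain_word(mis, es, tfidf, freq, bounds):
    """Keep a candidate if p_val >= 0.95 and it occurs more than 10 times."""
    p_val = sigmoid(normalize(mis, *bounds["mis"])
                    + normalize(es, *bounds["es"])
                    + normalize(tfidf, *bounds["tfidf"]))
    return p_val >= 0.95 and freq > 10

# Toy usage with made-up corpus statistics for one candidate n-gram.
bounds = {"mis": (0.0, 50.0), "es": (0.0, 5.0), "tfidf": (0.0, 10.0)}
print(is_domain_word(mis=49.0, es=4.95, tfidf=9.95, freq=37, bounds=bounds))
```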
For the sharing encoder, we hope that Eshr generates representations that fool the sentence-level discriminator so that it cannot correctly predict the domain of each sentence, such that Eshr finally extracts domain-agnostic features. Formally, given sentences ssrc and stgt from the source domain and the target domain, Eshr produces sequence features H∗s and H∗t for ssrc and stgt respectively.

The discriminator Gd of the network aims to distinguish the domain of each sentence. Specifically, we feed the final representation H∗ of every sentence s to a binary classifier Gy, for which we adopt the text CNN network (Kim, 2014). Gy produces the probability that the input sentence s comes from the source domain rather than the target domain. Thus, the loss function of the discriminator is:

Ld = − E_{s∼pS(s)}[ log Gy(Eshr(s)) ] − E_{s∼pT(s)}[ log(1 − Gy(Eshr(s))) ],   (7)

Features generated by the sharing encoder Eshr should be able to fool the discriminator so that it cannot correctly predict the domain of s. Thus, the loss function for the sharing encoder, Lc, is a flipped version of Ld:

Lc = − E_{s∼pS(s)}[ log(1 − Gy(Eshr(s))) ] − E_{s∼pT(s)}[ log Gy(Eshr(s)) ],   (8)

Finally, we concatenate H and H∗ as the final sequence representation of the input sentence. For ssrc from the source domain, H(ssrc) = [Hs ⊕ H∗s], while for stgt from the target domain, H(stgt) = [Ht ⊕ H∗t]. The final representation is fed into the CRF tagger.

So far, our model can be jointly trained in an end-to-end manner with the standard back-propagation algorithm. More details about the adversarial training process are described in Algorithm 1. When there is no annotated dataset for the target domain, we can remove Ltgt during the adversarial training process and use the source-domain segmenter for evaluation.

Algorithm 1: Adversarial training algorithm.
Input: manually annotated dataset Ds for source domain S, and distantly annotated dataset Dt for target domain T
for i ← 1 to epochs do
    for j ← 1 to num_of_steps_per_epoch do
        Sample mini-batches Xs ∼ Ds, Xt ∼ Dt
        if j mod 2 = 1 then
            loss = Lsrc + Ltgt + Ld; update θ w.r.t. loss
        else
            loss = Lsrc + Ltgt + Lc; update θ w.r.t. loss
        end
    end
end
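To make the adversarial objective concrete, the two losses of Eqs. (7) and (8) can be written as follows (a hedged PyTorch sketch; the function and argument names are ours, not from the released code, and the discriminator is assumed to output probabilities in (0, 1)):

```python
import torch

def adversarial_losses(h_src, h_tgt, discriminator):
    """Sketch of Eqs. (7)-(8). h_src / h_tgt are sentence-level representations
    produced by the shared encoder E_shr for source and target mini-batches."""
    p_src = discriminator(h_src)          # Gy(Eshr(s)), s from the source domain
    p_tgt = discriminator(h_tgt)          # Gy(Eshr(s)), s from the target domain
    # L_d: train the discriminator to tell the two domains apart
    l_d = -(torch.log(p_src + 1e-8).mean() + torch.log(1 - p_tgt + 1e-8).mean())
    # L_c: flipped objective, trains E_shr to confuse the discriminator
    l_c = -(torch.log(1 - p_src + 1e-8).mean() + torch.log(p_tgt + 1e-8).mean())
    return l_d, l_c
```

Following Algorithm 1, alternating steps would minimize Lsrc + Ltgt + Ld and Lsrc + Ltgt + Lc respectively.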
Dataset             Sents   Words   Chars   Domain
SRC  PKU   Train    47.3K   1.1M    1.8M    News
           Test     6.4K    0.2M    0.3M
TGT  DL    Full     40.0K   2.0M    2.9M    Novel
           Test     1.0K    32.0K   47.0K
     FR    Full     148K    5.0M    7.1M    Novel
           Test     1.0K    17.0K   25.0K
     ZX    Full     59.0K   2.1M    3.0M    Novel
           Test     1.0K    21K     31.0K
     DM    Full     32.0K   0.7M    1.2M    Medical
           Test     1.0K    17K     30K
     PT    Full     17.0K   0.6M    0.9M    Patent
           Test     1.0K    34.0K   57.0K
Table 1: Statistics of datasets. The datasets of the target domain (TGT) are originally raw texts without golden segmentation, and the statistics are obtained by the baseline segmenter. The DA module will distantly annotate the datasets as mentioned in 3.1.

4 Experiments
In this section, we conduct extensive cross-domain CWS experiments on multiple real-world datasets from different domains, and comprehensively evaluate our method against other approaches.

4.1 Datasets and Experimental Settings
Datasets  Six datasets across various domains are used in our work. The statistics of all datasets are shown in Table 1. In this paper, we use the PKU dataset (Emerson, 2005) as the source domain data, which is a benchmark CWS dataset in the newswire domain. The other five datasets from other domains are utilized as the target domain datasets. Among the five target domain datasets there are three Chinese fantasy novel datasets, including DL (DoLuoDaLu), FR (FanRenXiuXianZhuan) and ZX (ZhuXian) (Qiu and Zhang, 2015). An obvious advantage of the fantasy novel datasets is that they contain a large number of proper words coined by the author of each novel, which clearly reflects how well an approach alleviates the OOV problem. Besides the fiction datasets, we also use the DM (dermatology) and PT (patent) datasets (Ye et al., 2019), from the dermatology and patent domains respectively. All the domains of the target datasets are very different from the source dataset (newswire). To perform a fair and comprehensive evaluation, the full/test splits of the datasets follow Ye et al. (2019).

Hyper-Parameters  Table 2 shows the hyper-parameters used in our method. All the models are implemented with Tensorflow (Abadi et al., 2016) and trained using mini-batched back-propagation. The Adam optimizer (Kingma and Ba, 2015) is used for optimization. The models are trained on NVIDIA Tesla V100 GPUs with CUDA. (Source code and dataset will be available at https://github.com/Alibaba-NLP/DAAT-CWS.)

Hyper-parameter Name        Value
Threshold for pval          0.95
Char emb size               200
GCNN output dim             200
Text CNN num of filters     200
Text CNN filter size        [3,4,5]
GCNN layers                 5
Dropout Rate                0.3
Batch size                  128
Learning rate               0.001
Epochs                      30
Table 2: Hyper-parameters.

Evaluation Metrics  We use standard micro-averaged precision (P), recall (R) and F-measure as our evaluation metrics. We also compute OOV rates to reflect the severity of the OOV issue.

4.2 Compared Methods
We conduct comprehensive experiments against selected previously proposed methods:
Partial CRF (Liu et al., 2014) builds partially annotated data from raw text and lexicons via handcrafted rules, then trains the CWS model on both the labeled dataset (PKU) and the partially annotated data using a CRF.
CWS-DICT (Zhang et al., 2018) trains the CWS model with a BiLSTM-CRF architecture, which incorporates lexicon information into the neural network through handcrafted feature templates. For a fair comparison, we use the same domain dictionaries produced by the Domain-specific Words Miner for the Partial CRF and CWS-DICT methods.
WEB-CWS (Ye et al., 2019) is a semi-supervised word-based approach that uses word embeddings trained on segmented target-domain text to improve cross-domain CWS.
Besides, we implement strong baselines for a comprehensive evaluation:
GCNN (PKU) uses the PKU dataset only, with the GCNN-CRF sequence tagging architecture (Wang and Xu, 2017).
GCNN (Target) uses only the distantly annotated dataset built on the target domain.
GCNN (Mix) uses the mixture of the PKU dataset and the distantly annotated target domain dataset.
DA is the combination of GCNN (PKU) and domain-specific words; details are introduced in 3.1.
AT denotes the setting where we adopt adversarial training when no distantly annotated dataset for the target domain is provided but raw text is available.

4.3 Overall Results
The final results are reported in Table 3, from which we can observe that:
(1) Our DAAT model significantly outperforms previously proposed methods on all datasets, yielding state-of-the-art results. In particular, DAAT improves the F1-score on the five datasets from 93.5 to 94.1, 90.2 to 93.1, 89.6 to 90.9, 82.8 to 85.0 and 85.9 to 89.6 respectively. The results demonstrate that the unified framework is empirically effective at alleviating the OOV problem and fully utilizing source domain information.
(2) As mentioned in section 3, the AT model uses the same adversarial training network as DAAT, yet without annotation on the target domain dataset. Results in the AT setting explicitly reflect the necessity of constructing the annotated target domain dataset. Specifically, without the constructed dataset, the AT method only yields F1-scores of 90.7, 86.8, 85.0, 81.0 and 85.1 on the five datasets respectively. But when the annotated target domain dataset is used, we obtain DAAT with the best performance.
(3) WEB-CWS was the state-of-the-art approach that utilizes word embeddings trained on the segmented target text. It is worth noting that even our model that only combines the base segmenter trained on PKU with domain-specific words (DA) outperforms WEB-CWS, which indicates that the distant annotation method can exploit more and deeper semantic features from the raw text. For the CWS-DICT method, which requires an external dictionary, we use the word collection built by the Domain-specific Words Miner to guarantee the fairness of the experiments. We observe that our framework yields significantly better results than CWS-DICT. Moreover, CWS-DICT needs existing dictionaries as external information, which makes it difficult to transfer the model to brand new domains without specific dictionaries. In contrast, our framework utilizes the Domain-specific Words Miner to construct the word collection with high flexibility across domains.

4.4 Effect of Distant Annotation
In this section, we focus on exploring the ability of the DA method to tackle the OOV problem, by distantly constructing an annotated dataset from raw text in the target domain. As illustrated in Table 4, the cross-domain CWS task suffers from a surprisingly serious OOV problem. All OOV rates (source) are above 10%, which will definitely degrade model performance. Nevertheless, after constructing an annotated dataset on the target domain, the OOV rate (target) drops significantly. Specifically, the DA method yields 9.92%, 13.1%, 14.09%, 20.51% and 14.94% absolute OOV rate drops on the five out-of-domain datasets. This statistical result reveals that the Domain-specific Words Miner can accurately discover domain-specific words from raw text for any domain. Therefore, the DA module of our framework can effectively tackle the OOV problem. Moreover, the module does not need any domain-specific dictionaries, which means it can be transferred to new domains without limitations.

4.5 Impact of the Threshold pval
Obviously, the setting of the hyper-parameter pval directly affects the scale and quality of the domain-specific word collection. To analyze how pval affects model performance, we conduct experiments with pval set to values in {0.7, 0.8, 0.9, 0.95, 0.99}; the size of the word collection and the model performance on the DL and DM datasets are shown in Figure 4. Consistent with intuition, the collection size decreases as pval increases because the filtering criterion becomes stricter, which is also a process of noise reduction. However, the F1-score curves are not monotonically increasing or decreasing. When pval ≤ 0.95, the F1-scores on the two datasets increase because the words eliminated at this stage are mostly wrong, while the F1-scores plateau or decrease when pval > 0.95 because some correct words are then eliminated. We set pval = 0.95 to balance the quality and quantity of the word collection and thus guarantee model performance.
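As a concrete illustration of this filtering step (Eq. (6) combined with the pval ≥ 0.95 and frequency > 10 thresholds), a minimal sketch could look as follows; the precomputed per-candidate score dictionaries are our own assumption, not part of the released code:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def minmax(scores):
    """Min-max normalization N[.] over a dict of candidate -> raw score."""
    lo, hi = min(scores.values()), max(scores.values())
    return {k: (v - lo) / (hi - lo + 1e-12) for k, v in scores.items()}

def mine_words(mis, es, tfidf, freq, p_threshold=0.95, min_freq=10):
    """Combine the normalized factor scores as in Eq. (6) and keep candidates
    with pval >= p_threshold and frequency larger than min_freq."""
    n_mis, n_es, n_tfidf = minmax(mis), minmax(es), minmax(tfidf)
    collection = set()
    for t in mis:
        pval = sigmoid(n_mis[t] + n_es[t] + n_tfidf[t])
        if pval >= p_threshold and freq.get(t, 0) > min_freq:
            collection.add(t)
    return collection
```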
With this setting, the collection sizes are 0.7k words for DL, 1.7k for FR, 3.3k for ZX, 1.5k for DM and 2.2k for PT respectively.

4.6 Effect of Adversarial Learning
We develop an adversarial training procedure to reduce the noise in the annotated dataset produced by DA. In Table 3, we find that the GCNN (Target) method, trained on the annotated target dataset constructed by DA, achieves impressive performance on all five datasets, outperforming the WEB-CWS method. In addition, with the adversarial training module, the model further yields remarkable improvements in F1-scores.

Dataset  Partial CRF  CWS-DICT  WEB-CWS  |  AT    GCNN (PKU)  DA    GCNN (Mix)  GCNN (Target)  DAAT
DL       92.5         92.0      93.5     |  90.7  90.0        93.6  93.9        93.9           94.1 (+0.6)
FR       90.2         89.1      89.6     |  86.8  86.0        92.4  92.6        92.6           93.1 (+2.9)
ZX       83.9         88.8      89.6     |  85.0  85.4        90.4  90.6        90.7           90.9 (+1.3)
DM       82.8         81.2      82.2     |  81.0  82.4        83.8  83.9        84.3           85.0 (+2.2)
PT       85.0         85.9      85.1     |  85.1  87.6        89.1  89.3        89.3           89.6 (+3.7)
Table 3: The overall results (F1-score) on five datasets. The first block contains the latest cross-domain methods, and the second block reports the results for our implemented methods and DAAT. Numbers in parentheses indicate absolute improvement over previous SOTA results.

Dataset         OOV rate (source)   OOV rate (target)
Source  PKU     3.70%               –
Target  DL      11.15%              1.23%
        FR      14.08%              0.98%
        ZX      15.52%              1.43%
        DM      25.93%              5.42%
        PT      18.39%              3.45%
Table 4: OOV rates on five datasets. OOV rate (source) is the OOV rate between the test dataset and the PKU dataset. OOV rate (target) is the OOV rate between the test dataset and the constructed annotated target dataset.

Figure 4: The impact of different pval on mined collection size and model performance (F1-score and collection size for DL and DM).

The results demonstrate that the adversarial network can capture deeper semantic features than simply using the GCNN-CRF model, by better exploiting the information from both the source and target domains.

4.7 Analysis of Feature Distribution
As introduced in 3.2, in the process of adversarial learning, the domain-independent encoders learn domain-specific features Hs and Ht, and the sharing encoder learns domain-agnostic features H∗s and H∗t. We use the t-SNE algorithm (Maaten and Hinton, 2008) to project these feature representations into 2D points for visualization and further analyze the learned features.

Figure 5: t-SNE visualisation of H and H∗ produced by the domain-independent encoders and the sharing encoder on DM (a) and DL (b). Green points → Hs, black points → Ht, blue points → H∗s, red points → H∗t.
Figure 6: The impact of data amount for the source and target data on PKU (source, 47.3k sentences) and DL (target, 40.0k sentences).

As illustrated in Figure 5, the domain-independent features Hs (green) and Ht (black) have little overlap, indicating the distribution gap between different domains. However, the domain-agnostic feature distributions H∗s (red) and H∗t (blue) are very similar, implying that the learned feature representation can be well shared by both domains.
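A minimal sketch of producing such a visualization with scikit-learn's t-SNE is shown below (our own illustration; the feature arrays are assumed to be sentence-level representations collected from the trained encoders):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_feature_distribution(h_src, h_tgt, h_src_shared, h_tgt_shared):
    """Project the four feature sets to 2D with t-SNE, as in Figure 5.
    Each argument is a (num_points, dim) numpy array (assumed)."""
    feats = np.concatenate([h_src, h_tgt, h_src_shared, h_tgt_shared])
    points = TSNE(n_components=2, random_state=0).fit_transform(feats)
    sizes = [len(h_src), len(h_tgt), len(h_src_shared), len(h_tgt_shared)]
    colors, labels = ["green", "black", "blue", "red"], ["Hs", "Ht", "H*s", "H*t"]
    start = 0
    for size, color, label in zip(sizes, colors, labels):
        plt.scatter(points[start:start + size, 0], points[start:start + size, 1],
                    c=color, s=4, label=label)
        start += size
    plt.legend()
    plt.show()
```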
4.8 Impact of the Amount of Source and Target Data
In this subsection, we analyze the impact of the amount of data used for both the source and target domains; the experiment is conducted on the PKU (source) and DL (target) datasets. In Figure 6, we respectively select 20%, 40%, 60%, 80% and 100% of the source domain data and 1%, 5%, 20%, 50%, 100% of the target domain data for training. The result demonstrates that increasing either the source or the target data leads to an increase in F1-score. Generally, the amount of target data has a greater impact on overall performance, which conforms to intuition. The “1% Target Training Data” line indicates that the performance of the model is severely limited if target data is scarce. But when the amount of target data increases to 5%, the performance improves significantly, which shows our method's ability to exploit domain-specific information.

5 Conclusion
In this paper, we propose a unified framework that couples distant annotation and adversarial training for the cross-domain CWS task. In our method, we introduce an automatic distant annotator to build a labeled target domain dataset, effectively addressing the OOV issue. Further, an adversarial training procedure is designed to capture information from both the source and target domains. Empirical results show that our framework significantly outperforms previously proposed methods, achieving state-of-the-art results on all five datasets across different domains.

Acknowledgments
We sincerely thank all the reviewers for their insightful comments and suggestions. This research is partially supported by the National Natural Science Foundation of China (Grant No. 61773229 and 61972219), the Basic Research Fund of Shenzhen City (Grant No. JCYJ20190813165003837), and the Overseas Cooperation Research Fund of Graduate School at Shenzhen, Tsinghua University (Grant No. HW2018002).

References
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In Proceedings of OSDI, pages 265–283.
Deng Cai and Hai Zhao. 2016. Neural word segmentation learning for Chinese. In Proceedings of ACL, pages 409–420.
Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, and Feiyue Huang. 2017. Fast and accurate neural word segmentation for Chinese. In Proceedings of ACL, volume 2, pages 608–615.
Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, and Xuanjing Huang. 2015. Long short-term memory neural networks for Chinese word segmentation. In Proceedings of EMNLP, pages 1197–1206.
Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-criteria learning for Chinese word segmentation. In Proceedings of ACL, volume 1, pages 1193–1203.
Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In Proceedings of ICML, pages 933–941.
Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of SIGHAN workshop.
Youmna Farag, Helen Yannakoudakis, and Ted Briscoe. 2018. Neural automated essay scoring and coherence modeling for adversarially crafted input. In Proceedings of NAACL, pages 263–271.
Leilei Gan and Yue Zhang. 2019. Investigating self-attention network for Chinese word segmentation. arXiv preprint arXiv:1907.11512.
Jianfeng Gao, Mu Li, Chang-Ning Huang, and Andi Wu. 2005.
Chinese word segmentation and named entity recognition: A pragmatic approach. Computational Linguistics, 31(4):531–574. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of ICML, pages 1243–1252. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of NeurIPS, pages 2672–2680. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. Lifu Huang, Heng Ji, and Jonathan May. 2019a. Crosslingual multi-level adversarial transfer to enhance low-resource name tagging. In Proceedings of NAACL, pages 3823–3833. Weipeng Huang, Xingyi Cheng, Kunlong Chen, Taifeng Wang, and Wei Chu. 2019b. Toward fast and accurate neural Chinese word segmentation with multi-criteria learning. arXiv preprint arXiv:1903.04190. Edwin T Jaynes. 1957. Information theory and statistical mechanics. Physical review, 106(4):620. 6671 Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of EMNLP, pages 2021–2031. Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, and Eric Fosler-Lussier. 2017. Cross-lingual transfer learning for pos tagging without cross-lingual resources. In Proceedings of EMNLP, pages 2832– 2838. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP, pages 1746–1751. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR. Alexander Kraskov, Harald St¨ogbauer, and Peter Grassberger. 2004. Estimating mutual information. Physical review E, 69(6):066138. Zhongyang Li, Xiao Ding, and Ting Liu. 2018. Generating reasonable and diversified story ending using sequence to sequence model with adversarial training. In Proceedings of COLING, pages 1033–1043. Yang Liu and Yue Zhang. 2012. Unsupervised domain adaptation for joint segmentation and pos-tagging. In Proceedings of COLING, pages 745–754. Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, and Ting Liu. 2016. Exploring segment representations for neural segmentation models. In Proceedings of IJCAI, pages 2880–2886. Yijia Liu, Yue Zhang, Wanxiang Che, Ting Liu, and Fan Wu. 2014. Domain adaptation for crf-based Chinese word segmentation using free annotations. In Proceedings of EMNLP, pages 864–874. Ji Ma, Kuzman Ganchev, and David Weiss. 2018. State-of-the-art Chinese word segmentation with BiLSTMs. In Proceedings of EMNLP, pages 4902– 4908. Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579–2605. Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In Proceedings of CICLING, page 562. Likun Qiu and Yue Zhang. 2015. Word segmentation for Chinese novels. In Proceedings of AAAI, pages 2440–2446. Andrew Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE transactions on Information Theory, 13(2):260–269. Chunqi Wang and Bo Xu. 2017. Convolutional neural network with word embeddings for Chinese word segmentation. In Proceedings of IJCNLP, volume 1, pages 163–172. Pak-kwong Wong and Chorkin Chan. 1996. Chinese word segmentation based on maximum matching and word binding force. In Proceedings of COLING, pages 200–203. Jie Yang, Yue Zhang, and Fei Dong. 
2017. Neural word segmentation with rich pretraining. In Proceedings of ACL, volume 1, pages 839–849. Jie Yang, Yue Zhang, and Shuailong Liang. 2019. Subword encoding in lattice lstm for Chinese word segmentation. In Proceedings of NAACL, pages 2720– 2725. Yuxiao Ye, Weikang Li, Yue Zhang, Likun Qiu, and Jian Sun. 2019. Improving cross-domain Chinese word segmentation with word embeddings. In Proceedings of NACCL, pages 2726–2735. Qi Zhang, Xiaoyu Liu, and Jinlan Fu. 2018. Neural networks incorporating dictionaries for Chinese word segmentation. In Proceedings of AAAI, pages 5682– 5689. Hai Zhao, Chang-Ning Huang, Mu Li, and BaoLiang Lu. 2010. A unified character-based tagging framework for Chinese word segmentation. TALIP, 9(2):1–32. Lujun Zhao, Qi Zhang, Peng Wang, and Xiaoyu Liu. 2018. Neural networks incorporating unlabeled and partially-labeled data for cross-domain Chinese word segmentation. In Proceedings of IJCAI, pages 4602–4608. Hao Zhou, Zhenting Yu, Yue Zhang, Shujian Huang, Xinyu Dai, and Jiajun Chen. 2017. Word-context character embeddings for Chinese word segmentation. In Proceedings of EMNLP, pages 771–777. Joey Tianyi Zhou, Hao Zhang, Di Jin, Hongyuan Zhu, Meng Fang, Rick Siow Mong Goh, and Kenneth Kwok. 2019. Dual adversarial neural transfer for low-resource named entity recognition. In Proceedings of ACL, pages 3461–3471.
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6672–6681 July 5 - 10, 2020. c⃝2020 Association for Computational Linguistics 6672 Modeling Morphological Typology for Unsupervised Learning of Language Morphology Hongzhi Xu1,3, Jordan Kodner2, Mitch Marcus1, Charles Yang2 1CIS Department, University of Pennsylvania, Philadelphia, USA 2Linguistics Department, University of Pennsylvania, Philadelphia, USA 3ICSA Institute, Shanghai International Studies University, Shanghai, China [email protected], [email protected] [email protected], [email protected] Abstract This paper describes a language-independent model for fully unsupervised morphological analysis that exploits a universal framework leveraging morphological typology. By modeling morphological processes including suffixation, prefixation, infixation, and full and partial reduplication with constrained stem change rules, our system effectively constrains the search space and offers a wide coverage in terms of morphological typology. The system is tested on nine typologically and genetically diverse languages, and shows superior performance over leading systems. We also investigate the effect of an oracle that provides only a handful of bits per language to signal morphological type. 1 Introduction Morphological analysis aims to identify languages’ word-internal structures. Early approaches to the computational analysis of morphology modeled the structure of each language with hand-built rules, (e.g. Sproat, 1992). Such systems require a significant amount of work from domain experts, and while they tend to be very accurate, they also suffer from low coverage. Supervised and semisupervised machine learning approaches require expert input and will suffer from out-of-vocabulary problems. This paper focuses primarily on fully unsupervised morphological learning, which offers the most flexibility and can be deployed for new languages with no data annotation. Concatenation-based morphological learning systems aim to identify morphemes or morpheme boundaries within words (Virpioja et al., 2013; Goldwater and Johnson, 2004; Creutz and Lagus, 2005, 2007; Lignos, 2010; Poon et al., 2009; Snyder and Barzilay, 2008). The Morpho-Challenge tasks1 provide a set of morphologically annotated 1http://morpho.aalto.fi/events/morphochallenge/ data for testing concatenation. However, systems designed directly for identifying morpheme boundaries are limited in that non-linear structures such as infixation cannot be well captured. Another approach exploits morphological relations between word pairs. Related words form morphological chains through processes of derivation. There are many such processes including affixation at the edges or middle of a word, reduplication, stem transformations, and so on. Of these, only edge-affixation is available to concatenation-based models, so leveraging derivation directly allows for wider cross-linguistic coverage (Schone and Jurafsky, 2001; Narasimhan et al., 2015; Soricut and Och, 2015; Luo et al., 2017; Xu et al., 2018). A more holistic line of work builds learning on the concept of morphological paradigms (Parkes et al., 1998; Goldsmith, 2001; Chan, 2006; Xu et al., 2018). Paradigms can be defined as sets of morphological processes applicable to homogeneous groups of words. 
For example, the paradigm (NULL, -er, -est, -ly) in English can be applied to adjectives (e.g., high, higher, highest, highly), while (NULL, -ing, -ed, -s, -er) is defined over verbs (e.g, walk, walking, walked, walks, walker). Paradigms have several merits. First, they provide a principled strategy for tackling the data sparsity problem. In morphologically rich languages, a single word can derive hundreds of forms most of which will be unattested in real data. This can be addressed by taking paradigms into account because if a word appears in part of the paradigm, it likely can appear in the rest too. The recent SIGMORPHON shared tasks in paradigm filling are along this line (Cotterell et al., 2016, 2017, 2018). Second, paradigms can be used to identify spurious morphological analyses. For example, the words within, without, wither might be analyzed as applying suffixes -in, -out, -er to the word with, however, the paradigm (-in, -out, -er) is not reliable since it only applies to 6673 one single word, i.e. with. One thread common in previous work is the lack of consideration for characteristics of languagespecific morphological typology. In this paper, we propose a new framework that incorporates typological awareness by explicitly modeling different morphological patterns including suffixation, prefixation, infixation, and reduplication. These patterns have covered most common morphological processes of the languages in the world, with the exception of templatic morphology which is not represented in the LDC-provided test sets. By building such universal linguistic knowledge, the model will benefit from both constraining the search space (without generating a large amount of spurious analyses) and providing a wider coverage especially for the non-linear morphological structures. 2 Related Work The Morpho-Challenge tasks held between 2005 and 2010 motivated a large amount of work on unsupervised morphology learning including the Morfessor family of models. The Morfessor baseline system (Creutz and Lagus, 2002; Virpioja et al., 2013), an MDL model, is one of the most popular unsupervised systems for automatic morphological segmentation. Creutz and Lagus (2005, 2007) extend the model with the maximum a posteriori (MAP) on both observed data and the model. These systems only require word lists as input, which is an advantage for low-resource languages where there is no large corpus for training complex models. Various work has explored the idea of paradigms. Parkes et al. (1998) try to learn inflectional paradigms on English verbs, Goldsmith (2001, 2006) exploits the MDL principle to learn paradigms (referred to as signatures) with a greedy search strategy, and Dreyer and Eisner (2011) adopt a semi-supervised log-linear model to identify paradigms, which requires a number of seed paradigms for training. However, in morphologically rich languages such as Turkish where a single paradigm can be extremely large, this method requires considerable human annotation effort. Ahlberg et al. (2014) use a semi-supervised approach to learn abstract paradigms from a given inflection table. However, the task is different from what we discuss here, which discovers inflection tables as an intermediate step. Xu et al. (2018) create paradigms from the results of a probabilistic model and use the reliable paradigms to prune unreliable ones and achieve promising results. Xu et al. (2018)’s model only deals with suffixation. The framework that we develop in this paper is most directly inspired by Xu et al. 
(2018). Schone and Jurafsky (2001) use semantic information to identify real morphological pairs from a set of orthographically similar word pairs. Similarly, Soricut and Och (2015) use orthographic information to generate candidate morphological rules, e.g., prefix : $ : in, and then use word embeddings to evaluate the qualities of the rules. Narasimhan et al. (2015) create morphological chains, e.g., (play, playful, playfully), using both orthographic information and distributional semantics by maximizing the likelihood through a loglinear model. One drawback of using distributional information is that it requires large text corpora to train reliable semantic vectors. This is a major hurdle for applying such a system to low-resource languages. Based on the output of Narasimhan et al. (2015)’s model, Luo et al. (2017) adopt integer linear programming (ILP) to find globally optimal paradigms, which they call morphological forests, and achieve improved performance. 3 Morphological Typology This section surveys the morphological phenomena frequently observed among the world’s languages which our system is able to account for. 3.1 Prefixation, Suffixation, and Infixation Affixation is the appending of a bound morpheme or affix onto either end of a word and is the most common kind of morphological operation (Dryer, 2013). Affixes postpended to a word are called suffixes such as -ed, -ing, -ness, or -est in English, while prefixes are prepended such as pre- or un-, and infixes find their way into the middle of a root. Infixes are rarer cross-linguistically, but they do surface around the world, notably in languages like Tagalog (Malayo-Polynesian), dulot ∼d-in-ulot or graduate ∼gr-um-aduate. Many languages stack or nest affixes. English derivational morphology does this occasionally as in anti-dis-establish-ment-ari-an-ism or in Shona (S Bantu) inflectional morphology, for example, hamu-cha-mbo-nyatso-ndi-rov-es-i=wo ‘You will not cause me to be beaten’ (Mugari, 2013). A given affix may never appear on the edge of a word since it can be obligatorily followed or preceded by more affixes. This can be seen in Bantu verbs which nec6674 essarily end with a so-called final vowel morpheme (here, -a). Most other suffixes have to appear before the final vowel, so they are never themselves suffixes in the string sense. For example, given the Shona ku-pig-a ‘to strike,’ one could form kupig-an-a ‘to strike one another’ or ku-pig-w-a ‘to be stricken’ but not *ku-pig-w or *ku-pig-an. We will refer to the disconnect between morphological suffixation and string suffixation as the final vowel problem. 3.2 Reduplication and Partial Reduplication Reduplication, the doubling of all or a part of a word, is productive in many languages, especially outside modern Europe (Rubino, 2013). Full reduplication can indicate plural number, repeated actions, or progressive aspect in Austronesian languages such as Indonesian and Tagalog. In Indonesian, sometimes a whole word including its affixes is reduplicated (bangun-an-bangun-an), while other times it is only the root (deg-deg-an or ber-bondong-bondong). Partial reduplication is exemplified in Pangasinan, an Austronesian relative of Tagalog, which has more productive partial reduplication for plurals. It can surface on the left (plato ∼pa-pláto), or it may be infixed (amigo ∼ ami-mí-go) (Rubino, 2001). 3.3 Stem Changes Some morphology is expressed through stem changes rather than string concatenation. 
English often expresses past tense, past participles, and plurals with changes to stem vowels, sometimes in conjunction with affixation (sing ∼sang ∼sung, freeze ∼froze ∼froz-en, and goose ∼geese). Consonants can alternate as well, for example in Finnish luku ∼luvu-t and etsi-nt-ä ∼etsi-nn-ät. Some changes are morphophonological because they are related to the phonology of the language and thus are somewhat predictable. For example, the Latin root scrib becomes scrip-t-us in the past participle because /b/ is devoiced before /t/. These contrast with alternations like goose ∼geese which are arbitrary – there is no moose ∼*meese. Vowel harmony is a kind of pervasive global morphophonological pattern which forces vowels in a word to share certain features. In the simplest case, this often results in affix allomorphy where each affix has alternate forms that agree with the features in the root or the root must agree with the affixes. Finnish presents a classic example of frontback vowel harmony: a word may contain front vowels (ä, ö, ÿ) or back vowels (a, o, u) but not both. Suffixes have front and back allomorphs in order to agree with the stem. For example, contrast the front-containing suffixes after front-containing root liity-nt-öjä with the same suffixes after a backcontaining root liiku-nt-oja. 4 Modeling Morphological Processes In this section, we describe our framework for modeling language morphologies, including prefixation, suffixation, infixation, full and partial reduplication. We also model stem changes that typically occur at word boundaries except for vowel changes. 4.1 Morphology as Lexical Pairs Many theories of morphology such as paradigmbased morphology, e.g. Paradigm Function Morphology (Stump, 2001), cast morphology as a relation between word pairs. We adopt this perspective as the basis of our framework, except that we do not differentiate derivational morphology from inflection. In detail, the framework assumes morphology to be an operation that is applied to a word (root) to form another word and effects a change in meaning along some dimension, e.g., adding information such as case, number, gender, tense, or aspect. We denote such a morphological process with a function f. The function takes a root word r as input and forms a new word w, i.e. f(r) = w. Thus the task of morphology learning can be defined as searching for a function f and another word r, given a word w, such that f(r) = w. 4.2 Constraining the Search Space with Morphological Typology Here, we describe how we incorporate prefixation, suffixation, infixation, and full and partial reduplication to constrain the morphological function space. This improves over naive methods focusing on edit distance, which can be used to evaluate how good a morphological function is locally. Globally, a morphological function can be evaluated by observing its overall frequency, namely its corpus productivity in a language. Such a simple system would tend to hallucinate many spurious yet frequent morphological functions, which may not be possible morphologically from a richer linguistic perspective. Morphological patterns allow us to represent the derivation of complex words from root words. A prefixation pattern can be defined as <prefix>_x, 6675 where <prefix> is a specific prefix in a language, and x stands for the root. For example, the pattern <un->_x describes how the word unfold can be derived from fold with a prefix. 
A suffixation pattern can be defined as x_<suffix> and an infixation pattern can be defined similarly as bx_<infix>_ex, where bx and ex are the beginning and ending part of the root word x and x = bx + ex. Reduplication functions can be defined in the same way. A full reduplication pattern is defined as x_x. A partial reduplication can be defined as bx_x (bx ̸= x) with the partial copy of x on the left or x_ex (ex ̸= x) with the partial copy on the right. Table 1 shows all the morphological patterns associated with examples from different languages. Morphological Type Eg. Func Eg. words Prefixation <di->_x di-bangun2 Infixation bx_<-in->_ex d-in-ulot4 Suffixation x_<-ε> kyerε-ε1 Full reduplication x_x kyerε-kyerε1 Partial reduplication (L) bx_x ka-kain4 Partial reduplication (R) x_ex Final Vowel / Theme V x-v<a> pig-a3 Table 1: Morphological operations with example patterns and words in 1 Akan, 2 Indonesian, 3 Swahili, and 4 Tagalog. No right partial reduplication is present in our test set. 4.3 Morphophonological Rules Here, we define the stem change rules that are motivated by morphophonological observations on languages which we denote with the function g. We extend the capabilities of previous systems (Narasimhan et al., 2015; Xu et al., 2018) and model six transformation rules as follows: Insertion (INS) of a letter at the end of the root. E.g. the Spanish word quiera can be analyzed as (quer, -a, INS-i). Deletion (DEL) of the end letter of the root. E.g. using can be analyzed as (use, -ing, DEL-e). Gemination (GEM) of the end letter of the root. E.g. stopped can be analyzed as (stop, -ed, GEMp). Degemination (DEG) of the end letter of the root if it is in a reduplication form. E.g. the Finnish word katot can be analyzed as (katto, -t, DEG-t). Substitution (SUB) of the end letter of the root with another. E.g. the word carries can be analyzed as (carry, -es, SUB-y-i). VowelChange (VOW) of the right or left most vowel of the root with another. For example, the word drunken can be analyzed as (drink, -en, VOWi-u). This feature requires the system to be aware of a global vowel inventory. 4.4 Generating Candidate Morphological Functions A morphological function is defined as two parts: the morphological pattern, and the corresponding stem changes, f = [<stem_change>, <morph_pat>], where <stem_change> is first applied to the root, with the output fed into the <morph_pat> to generate the derived word. A detailed definition can be denoted as f(r) = [g(x), <prefix>_x](r), where r is the root word which can apply this rule to derive another word, and g is a stem change function. For example, a prefixation function f(r) = [$(x), <un->_x](r) (where $(x) means no stem change applies) can be applied to the verb fold to generate the verb unfold. Similarly, a suffixation function f(r) = [SUB-y-i(x), x_<-ed>](r) can be applied to the verb carry to generate the verb carri-ed. We can define an infixation function f(r) = [($(bx), $(ex)), bx_<-um->_ex](r); when applied to the word kakain, it can generate the verb k-um-akain. A full reduplication function can be defined as f(r) = [$(x), $(x)), x_x](r); when applied to the word ‘kyerε’, it can generate the verb kyerε-kyerε. A partial reduplication function f(r) = [($(bx), $(x)), bx_x](r), when applied to the word kain, can generate the verb ka-kain. The central phase of learning involves generating potential morphological functions. During this phase, no stem changes are allowed in order to limit spurious functions. 
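To ground the [stem change, pattern] notation before describing how candidates are generated, here is a small self-contained sketch (our own illustration, not the authors' code) of applying such a function to a root:

```python
def apply_function(root, stem_change, pattern):
    """Apply a morphological function f = [stem_change, pattern] to a root.
    `stem_change` maps the root to a possibly modified form; `pattern` is a
    (kind, argument) pair mirroring the patterns in Table 1 (hypothetical encoding)."""
    x = stem_change(root)
    kind, arg = pattern
    if kind == "prefix":
        return arg + x
    if kind == "suffix":
        return x + arg
    if kind == "infix":
        infix, pos = arg               # insert the infix after `pos` characters
        return x[:pos] + infix + x[pos:]
    if kind == "full_redup":
        return x + x
    if kind == "left_redup":
        return x[:arg] + x             # copy the first `arg` characters on the left
    raise ValueError(kind)

identity = lambda x: x
sub_y_i = lambda x: x[:-1] + "i" if x.endswith("y") else x     # SUB-y-i stem change

print(apply_function("fold", identity, ("prefix", "un")))      # unfold
print(apply_function("carry", sub_y_i, ("suffix", "ed")))      # carried
print(apply_function("kain", identity, ("left_redup", 2)))     # kakain
```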
Learning is done by comparing each word pair and postulating a function f that can explain the pair, where the function f is constrained through morphological typology as described in Section 3. For example, given the word pair (fold, unfold), we can postulate a prefixation function f(r) = [$(x), <un->_x](r); given word pair (kain, kakain), we can postulate a left partial reduplication function f(r) = [($(bx), $(x)), bx_x](r). For affixation, including prefixation, infixation, and suffixation, a set of candidate affixes is needed before generating morphological functions. This can be done by comparing all possible word pairs, a similar method used by previous studies (e.g. Narasimhan et al., 2015; Xu et al., 2018). For prefixes, if w = s + w′, where w and w′ are both attested words in the word list, then s is a can6676 didate prefix. We use the cardinality of the set {(w, w′) : w = s + w′} to evaluate how good the candidate prefix s is. Similarly, for suffixes, if w = w′+s, then s is a candidate suffix. For infixes, if w = bw′+s+ew′, where w and w′ = bw′+ew′ are both attested words in the word list, then s is a candidate infix. Finally, only the top N most frequent candidates for each affix type are selected. 4.5 Searching for Candidate Analyses for Individual Words After generating all morphological functions reflecting each morphological type, searching for candidate analyses for individual words is conceptually straightforward. For a given word w, we find all possible morphological functions {f : f = [g, m]} associated with a root word r, such that w = f(r). For example, the word reread can be analyzed as <re->_X, bx_<-re->_ex, and bx_x. This is somewhat complicated by the need to find possible morphophonological (stem change) rules on the root words. The basic idea is that when checking a possible prefixation pattern, for example w = s + w′, rather than assuming w′ is an attested word, we assume that if there is an attested word w′′ and a potential stem change rule g, such that w′ = g(w′′), then <s>_x is a potential prefixation pattern for w. We can easily create an index based on the attested words to accelerate the searching process. Searching for suffixation and infixation can be done is a similar way. For reduplication, we use a similar strategy. If w = bw′ + w′, i.e. a word w can be decomposed into another word w′ plus a string prefix of w′ on the left, then we postulate a partial reduplication pattern for word w, i.e. bx_x. If w = w′ + ew′, then x_ex can be generated. For example, given that the word reread = re + read and read is itself a word, we can hypothesize that the word is bx_x. For full reduplication, if a word w = w′ + w′, where w′ is another word, then a morphological pattern x_x can be generated for w. For more complicated cases, we extend the search for reduplication of individual words with possible stem change rules. For partial reduplication, if a word w = s + w′, and there is a stem change rule g, such that s = g(bw′), then we can also postulate a partial reduplication pattern for w, with a stem change rule on bw′. Similarly, if a word w = s + s′, and there is a stem change function g and an attested word w′ such that s′ = g(w′) and s = bw′, then we can also postulate a partial reduplication pattern for w with a stem change rule on w′. For full reduplication, if a word w = s + s′, there are (up to) two stem change functions g and g′, and a word w′, such that s = g(w′) and s′ = g′(w′), then we can postulate a full reduplication pattern for w. 
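The search described in this subsection can be sketched as follows for the concatenative and reduplicative cases without stem changes (our own illustration; infixation, right partial reduplication, and the stem-change extensions are analogous):

```python
def candidate_analyses(word, vocab, prefixes, suffixes):
    """Candidate (root, pattern) analyses for `word`, with the root attested in `vocab`.
    `prefixes` and `suffixes` are the pre-selected top-N candidate affix sets."""
    analyses = []
    for p in prefixes:                                    # w = prefix + root
        if word.startswith(p) and word[len(p):] in vocab:
            analyses.append((word[len(p):], ("prefix", p)))
    for s in suffixes:                                    # w = root + suffix
        if word.endswith(s) and word[:-len(s)] in vocab:
            analyses.append((word[:-len(s)], ("suffix", s)))
    half = len(word) // 2                                 # w = root + root (full reduplication)
    if len(word) % 2 == 0 and word[:half] == word[half:] and word[:half] in vocab:
        analyses.append((word[:half], ("full_redup", None)))
    for k in range(1, (len(word) + 1) // 2):              # w = b_root + root (left partial redup.)
        root = word[k:]
        if root in vocab and word[:k] == root[:k]:
            analyses.append((root, ("left_redup", k)))
    return analyses

# candidate_analyses("reread", {"read", "reread"}, ["re"], ["ing"]) yields both
# ("read", ("prefix", "re")) and ("read", ("left_redup", 2)), as in the example above.
```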
4.5.1 Further Decreasing the Search Space
A large number of spurious candidate analyses will be generated once we allow stem change rules. However, some candidate analyses can be ruled out given other candidates. For example, the word ‘saying’ can be analyzed as (say, $, x_<-ing>), but also as (says, DEL-<s>, x_<-ing>); the latter is unnecessary given the former and a heuristic that prefers analyses with no stem changes over those with stem changes. So, to further decrease the search space, we employ a set of heuristics to eliminate some of the candidate analyses before the next step. They follow a principle of parsimony, namely once a simpler analysis is generated, the more complicated ones that are related will be excluded. (The details will be given in a separate document with the code that will be made publicly available.)

5 Disambiguation with a Probabilistic Model
After generating all candidate analyses for a given word, we evaluate how good each candidate is so we can choose the best one as the final analysis. We compute the conditional probability of a candidate analysis [g, m](r) given a word w = [g, m](r), namely P(r, g, m|w). We set P(r, g, m|w) = 0 if [g, m](r) ≠ w. Otherwise, we use the following formula to calculate this probability:

P(r, g, m|w) = P(r, g, m) / Σ_{[g′,m′](r′) = w} P(r′, g′, m′)   (1)

To compute the probability of a candidate analysis (w = [g, m](r)), P(r, g, m), we assume that r, g and m are independent of each other. So,

P(r, g, m) = P(r) × P(g) × P(m)   (2)

The probabilities in this model can be estimated using EM, initialized by counting all the candidate analyses of all words in the word list and assuming that each candidate has the same probability.

5.1 Solving Oversegmentations with Paradigms
We extend Xu et al. (2018)'s work and use statistically reliable paradigms for filtering unreliable ones. In detail, a paradigm is defined by Xu et al. (2018) upon a set of suffixes. Here, we extend this definition to a mixture of different types of morphological processes, i.e. M = {m}, that can be applied to the same set of roots R = {r} to be in a paradigm. Formally, a paradigm is defined as p = R × M. Finally, paradigms of size at least 2 × 2 are selected as reliable ones, namely at least two morphological patterns supported by at least two roots. Similar to Xu et al. (2018), stem changes are not part of the paradigm since they are generally independent processes.

After finding possible paradigms, we use the same method for pruning unreliable paradigms. Given an unreliable paradigm p = R × M, the intersection of the morphological pattern set M and the set Mi of each reliable paradigm pi is computed, i.e. M′i = M ∩ Mi, and the one with the best score, e.g. M′k, is chosen as the pruned result, i.e. p′ = R × M′k. Finally, the score of an intersection M′i is the sum of the frequencies of all the morphological patterns in the intersection, as shown in equation 3:

score(M) = Σ_{m∈M} freq(m)   (3)

5.2 Generating Morphological Derivations
After the one-step roots of all the words are found, morphological derivations (e.g., sterile, sterilize, sterilizing) are automatically generated iteratively by our system, as well as final segmentations (e.g., steril-iz-ing). As described in the next section, because evaluation is based on morpheme boundary identification, generating such a segmentation is necessary.
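Before turning to the experiments, the disambiguation step of Eqs. (1)-(2) in Section 5 can be sketched as follows (our own illustration; the probability tables are assumed to come from the counting/EM procedure described above):

```python
def best_analysis(word, candidates, p_root, p_change, p_pattern):
    """Score each candidate analysis (r, g, m) of `word` by the normalized product
    P(r)P(g)P(m), as in Eqs. (1)-(2), and return the highest-scoring one.
    `candidates` is a list of (root, stem_change, pattern) triples that all derive `word`."""
    scores = [p_root.get(r, 0.0) * p_change.get(g, 0.0) * p_pattern.get(m, 0.0)
              for r, g, m in candidates]
    total = sum(scores)
    if total == 0:
        return None
    posteriors = [s / total for s in scores]          # P(r, g, m | w)
    best = max(range(len(candidates)), key=lambda i: posteriors[i])
    return candidates[best], posteriors[best]
```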
6 Experiments

6.1 Settings
We compare our model with Morfessor (Virpioja et al., 2013), the most popular baseline, Morpho-Chain (MC) (Narasimhan et al., 2015) and its improved version, Morph-Forest (MF) (Luo et al., 2017), and ParaMA (PMA) (Xu et al., 2018). We evaluate the models with segmentation points (boundaries of morphemes), the same metric used by Narasimhan et al. (2015) and Xu et al. (2018).

We run our model in two different settings. In the primary experiment, we run it as a fully unsupervised model (FU), assuming all possible typological features. In a secondary experiment, each language's morphological typology is provided by an oracle so that the model can only search relevant patterns per language (U+T). A vowel inventory is also provided so that our system can discover the vowel change rules described in Section 4.3. MC and MF are run in two different configurations, one with semantic vectors (+v) and the other without vectors (more comparable to Morfessor, ParaMA, and our system).

We conduct the experiments with a data set containing 9 languages from diverse language families (Mott et al., 2020). The details of the data sets, including the typological features for each language and the size of the corpus used for training word vectors, are shown in Table 2. The word lists used for training are extracted from the language pack created under the DARPA LORELEI (LOw REsource Languages and Emergent Incidents) program. The gold standard data, soon to be released by LDC, is annotated only with morpheme segmentations, and no data annotation was used in training. The languages with non-Latin scripts were romanized with the tools provided in the package.

Lang   Train    Test   Corpus   Morphology
Aka    74K      2K     3M       pref, suf, red
Hin    487K     2K     28M      pref, suf, red
Hun    4,390K   2K     574M     pref, suf
Ind    525K     2K     19M      pref, suf, inf, red
Rus    1,485K   2K     1,068M   pref, suf
Spa    564K     2K     24M      pref, suf
Swa    224K     2K     4M       pref, suf, red, fv
Tag    13K      2K     5M       pref, suf, inf, lred, red
Tam    2,363K   2K     47M      pref, suf, red
Table 2: Number of word types for training and testing, corpus size for training word vectors (only for the Morpho-Chain and Morph-Forest systems), and the morphological features (pref: prefixation; suf: suffixation; inf: infixation; red: full reduplication; lred: left reduplication; fv: final vowel) for each language.

6.2 Experimental Results and Analyses
Results are presented in Figure 1. The details are shown in Table 3 and Table 4. Both our unsupervised model (FU) and the model with given typology (U+T) achieve higher average F1 than previous work by a large margin, the highest on five of nine languages, and competitive results overall on the other four. Of the two systems, the typology feature oracle provided only slightly better average performance than fully unsupervised.

Figure 1: Comparison of different systems in F1 scores on the nine languages and their average. FU and U+T are our systems. FU is fully unsupervised, while U+T is unsupervised except given six flags for language typology.
        Morf   MC     MF     MC+v   MF+v   PMA    U+T    FU
Aka     0.633  0.650  0.646  0.498  0.530  0.530  0.680  0.679
Hin     0.258  0.359  0.346  0.494  0.505  0.586  0.432  0.398
Hun     0.407  0.532  0.533  0.622  0.619  0.532  0.554  0.551
Ind     0.532  0.499  0.497  0.561  0.622  0.469  0.686  0.682
Rus     0.347  0.492  0.493  0.427  0.450  0.458  0.493  0.490
Spa     0.250  0.498  0.502  0.051  0.034  0.405  0.473  0.472
Swa     0.432  0.430  0.409  0.202  0.189  0.343  0.512  0.533
Tag     0.525  0.484  0.470  0.430  0.439  0.411  0.587  0.566
Tam     0.237  0.293  0.291  0.341  0.336  0.396  0.446  0.426
Avg     0.402  0.471  0.465  0.403  0.414  0.459  0.541  0.533
Table 3: Experimental results in F1 measures on the nine languages including our unsupervised (FU) and oracle (U+T) systems. The best score for each language is highlighted, considering each of our systems separately against previous work.

        Morf   MC     MF     MC+v   MF+v   PMA    U+T    FU
P       0.618  0.387  0.391  0.504  0.523  0.514  0.525  0.495
R       0.317  0.647  0.623  0.370  0.383  0.428  0.576  0.593
F1      0.402  0.471  0.465  0.403  0.414  0.459  0.541  0.533
Table 4: Average performance of the systems in precision, recall and F1 measures. Best result in bold, considering all systems together.

As expected, given the very low-resource setting, the vector configuration harms the performance of both MC and MF in languages such as Akan, Spanish, Swahili and Tagalog. Even though Russian has a larger corpus, the vectors still harm performance, which we believe is due to its complicated morphology that demands many examples to train reliable vectors. While having separate patterns for each morphology type seems to improve numbers, oracle information improves results only slightly, mostly on Hindi, Tagalog, and Tamil. Interestingly, the performance on Swahili noticeably decreases. Based on detailed observation, this is due to our infixation search providing an unexpected benefit for Swahili, a language with no linguistic infixation but with the final vowel pattern, by allowing us to capture string-internal linguistic suffixes, as in the passive suffix -w- extracted from the verb kunyang’anywa here as bx_<w>_ex. In all, the performance of our model in either mode is better than the other systems we tested.

To test the contribution of morphological patterns other than prefixation and suffixation, we perform an ablation study, running the system with only prefixation and suffixation enabled. The results are shown in Figure 2. First, most of the performance for most languages is due to prefixation and suffixation, since these are the predominant processes. However, performance decreases measurably for Tagalog, Indonesian and Hindi due to the presence of more complex morphological patterns. This shows that modeling morphological features other than prefixation and suffixation has important benefits on languages with complicated morphology.

Figure 2: The performance of our model with oracle typological features (U+T) and with only prefixation and suffixation (Pref+Suf).
          Aka    Hin    Hun    Ind    Rus    Spa    Swa    Tag    Tam    Avg
U+T       0.68   0.432  0.554  0.686  0.493  0.473  0.512  0.587  0.446  0.541
Pref+Suf  0.683  0.411  0.554  0.67   0.493  0.473  0.512  0.522  0.445  0.529

6.3 Discussion
Our system, in both its configurations, achieves the highest average performance among those tested. It has other advantages as well. Firstly, although our model is evaluated in terms of morpheme boundaries, it produces much richer structures than that.
It determines how a complex word is derived from another one through a particular morphological process such as prefixation, suffixation, infixation or full or partial reduplication. In comparison, other systems including Morpho-Chain, Morph-Forest, and ParaMA only deal with prefixes and suffixes. Our experiments as shown in Figure 2 indicate that modeling morphological patterns/processes other than prefixation and suffixation is useful. Systems that directly find morpheme boundaries such as Morfessor are not aware of the particular morphological processes that a word's derivation goes through. So for infixed words, for example, even if the morpheme boundaries are correctly identified by such systems, they will incorrectly characterize the word as containing three morphemes rather than two. Such analyses are incorrect even though they are not penalized under a boundary-based evaluation metric.

By modeling different types of morphological structures, our system can be used to study the productivity of each morphological process and thus can be used for quantitative analysis in theoretical morphological studies in linguistics. Figure 3 shows the number of instances of each type of morphological process generated by our fully unsupervised model. Suffixation and prefixation are the most common processes. Most of our test languages exhibit more suffixation than prefixation, but Swahili has more prefixation than suffixation, as expected for a Bantu language.

Figure 3: Normalized distribution of morphological patterns discovered by our unsupervised model for each language (top) and zoomed in on less frequent patterns (bottom). SUF: suffix, PREF: prefix, INF: infix, RED: full reduplication, LRED: left partial reduplication, RRED: right partial reduplication.

Figure 3 also shows that reduplication is rarer than other affixation. However, our model does discover full and left-partial reduplication successfully in languages that exhibit it. For example, about 1% of Akan words and fewer than 1% of Indonesian, Swahili and Tagalog words were analyzed with full or partial reduplication. Infixation is challenging to correctly identify because infixes can appear in almost any position inside a word, and therefore generate a large search space. Our unsupervised system uses infixation to represent both true morphological infixation as in Tagalog as well as word-internal agglutinative suffixation as in Swahili, Hindi, and Tamil. This hurts the performance for Hindi and Tamil, but provides a benefit for Swahili as discussed above. Finally, our system is fast, typically completing in several minutes, similar to ParaMA. Other systems including Morfessor, MC and MF typically require several hours, or even days on longer word lists such as for Hungarian and Russian.

7 Conclusion and Future Work
In this paper, we develop a model for morphological analysis that exploits typological features to achieve the best performance on a wide range of languages. The tool is publicly available here: https://github.com/xuhongzhi/ParaMA2. This unsupervised model can be quickly and easily extended to novel languages without data annotation or expert input. Combined with the ability to process infixation and reduplication, our system improves access for geographically diverse low-resource languages.
Although the evaluation is based on segmentation points, our model outputs much richer structure. It can also tell us the productivity of each morphological process and thus can obtain much deeper knowledge in terms of morphological structures of languages. Our next step will be to attempt to automate the determination of language typology, yielding somewhat better performance with a system requiring no human intervention per language at all. Future work will aim to extend the current model to capture particularly challenging morphological patterns such as templatic non-concatenative morphology and polysynthetic composition. 6680 Acknowledgements We thank the rest of the University of Pennsylvania’s LORELEI research team for the helpful discussions. We also thank the anonymous reviewers for their valuable and constructive comments for improving our paper. This research was funded by the DARPA LORELEI program under Agreement No. HR0011-15-2-0023. References Malin Ahlberg, Mans Hulden, and Markus Forsberg. 2014. Semi-supervised learning of morphological paradigms and lexicons. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 569–578. Erwin Chan. 2006. Learning probabilistic paradigms for morphology in a latent class model. In Proceedings of the Eighth Meeting of the ACL Special Interest Group on Computational Phonology and Morphology, pages 69–78. Association for Computational Linguistics. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Arya D McCarthy, Katharina Kann, Sebastian Mielke, Garrett Nicolai, Miikka Silfverberg, et al. 2018. The conll–sigmorphon 2018 shared task: Universal morphological reinflection. arXiv preprint arXiv:1810.07125. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra Kübler, David Yarowsky, Jason Eisner, et al. 2017. Conllsigmorphon 2017 shared task: Universal morphological reinflection in 52 languages. arXiv preprint arXiv:1706.09031. Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The sigmorphon 2016 shared task— morphological reinflection. In Proceedings of the 2016 Meeting of SIGMORPHON, Berlin, Germany. Association for Computational Linguistics. Mathias Creutz and Krista Lagus. 2002. Unsupervised discovery of morphemes. In Proceedings of the ACL-02 workshop on Morphological and phonological learning-Volume 6, pages 21–30. Association for Computational Linguistics. Mathias Creutz and Krista Lagus. 2005. Unsupervised morpheme segmentation and morphology induction from text corpora using Morfessor 1.0. Helsinki University of Technology. Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. ACM Transactions on Speech and Language Processing (TSLP), 4(3):1–34. Markus Dreyer and Jason Eisner. 2011. Discovering morphological paradigms from plain text using a dirichlet process mixture model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 616–627. Association for Computational Linguistics. Matthew S. Dryer. 2013. Prefixing vs. suffixing in inflectional morphology. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. 
John Goldsmith. 2006. An algorithm for the unsupervised learning of morphology. Natural Language Engineering, 12(4):353–371.
Sharon Goldwater and Mark Johnson. 2004. Priors in Bayesian learning of phonological rules. In Proceedings of the 7th Meeting of the ACL Special Interest Group in Computational Phonology: Current Themes in Computational Phonology and Morphology, pages 35–42. Association for Computational Linguistics.
Constantine Lignos. 2010. Learning from unseen data. In Proceedings of the Morpho Challenge 2010 Workshop, pages 35–38.
Jiaming Luo, Karthik Narasimhan, and Regina Barzilay. 2017. Unsupervised learning of morphological forests. Transactions of the Association for Computational Linguistics, 5:353–364.
Justin Mott, Ann Bies, Stephanie Strassel, Jordan Kodner, Caitlin Richter, Hongzhi Xu, and Mitchell Marcus. 2020. Morphological segmentation for low resource languages. In International Conference on Language Resources and Evaluation (LREC).
Victor Mugari. 2013. Object marking restrictions on Shona causative and applicative constructions. Southern African Linguistics and Applied Language Studies, 31(2):151–160.
Karthik Narasimhan, Regina Barzilay, and Tommi Jaakkola. 2015. An unsupervised method for uncovering morphological chains. Transactions of the Association for Computational Linguistics, 3:157–167.
Cornelia Parkes, Alexander M. Malek, and Mitchell P. Marcus. 1998. Towards unsupervised extraction of verb paradigms from large corpora. In Proceedings of the Sixth Workshop on Very Large Corpora (COLING-ACL).
Hoifung Poon, Colin Cherry, and Kristina Toutanova. 2009. Unsupervised morphological segmentation with log-linear models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 209–217. Association for Computational Linguistics.
Carl Rubino. 2001. Pangasinan. In Jane Garry and Carl Rubino, editors, Encyclopedia of the World's Languages: Past and Present, pages 539–542. H.W. Wilson Press, New York / Dublin.
Carl Rubino. 2013. Reduplication. In Matthew S. Dryer and Martin Haspelmath, editors, The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology, Leipzig.
Patrick Schone and Daniel Jurafsky. 2001. Knowledge-free induction of inflectional morphologies. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies, pages 1–9.
Benjamin Snyder and Regina Barzilay. 2008. Unsupervised multilingual learning for morphological segmentation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 737–745.
Radu Soricut and Franz Och. 2015. Unsupervised morphology induction using word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1627–1637.
Richard William Sproat. 1992. Morphology and computation. MIT Press.
Gregory T. Stump. 2001. Inflectional morphology: A theory of paradigm structure, volume 93. Cambridge University Press.
Sami Virpioja, Peter Smit, Stig-Arne Grönroos, Mikko Kurimo, et al. 2013. Morfessor 2.0: Python implementation and extensions for Morfessor Baseline.
Hongzhi Xu, Mitchell Marcus, Charles Yang, and Lyle Ungar. 2018. Unsupervised morphology learning with statistical paradigms. In Proceedings of the 27th International Conference on Computational Linguistics, pages 44–54.